Commit d1fdbe0

docs(models): clarify Gemma 3 classes, add Gemma 4 sample
- Update docstrings for Gemma, Gemma3Ollama, and GemmaFunctionCallingMixin to clarify they are Gemma 3-only
- Add Gemma 4 usage guidance pointing to Gemini/LiteLlm classes
- Add hello_world_gemma4 sample using standard Gemini class
- Add header comments and READMEs to existing Gemma 3 samples
- Add registry non-collision test for Gemma 4 model strings
- Update registration comments in models/__init__.py
1 parent f973673 commit d1fdbe0

11 files changed

Lines changed: 271 additions & 33 deletions

Lines changed: 28 additions & 0 deletions
# Hello World — Gemma 3

This sample demonstrates using **Gemma 3** models with ADK via the `Gemma`
class. The `Gemma` class provides workarounds for Gemma 3's lack of native
function calling and system instruction support.

## When to use this

Use this approach for **Gemma 3 models only**. For Gemma 4 and later, use the
standard `Gemini` class directly — see the
[`hello_world_gemma4/`](../hello_world_gemma4/) sample.

## Running this sample

```bash
# From the repository root
adk run contributing/samples/hello_world_gemma

# Or via the web UI
adk web contributing/samples
```

## Related samples

- [`hello_world_gemma4/`](../hello_world_gemma4/) — Gemma 4 via standard Gemini class (recommended for Gemma 4+)
- [`hello_world_gemma3_ollama/`](../hello_world_gemma3_ollama/) — Gemma 3 via Ollama

contributing/samples/hello_world_gemma/agent.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -12,6 +12,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+# This sample uses the `Gemma` class, which provides workarounds for Gemma 3's
+# lack of native function calling. For Gemma 4+, use `Gemini` directly —
+# see the hello_world_gemma4/ sample.
 
 import random
```

contributing/samples/hello_world_gemma3_ollama/agent.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -12,6 +12,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+# This sample uses `Gemma3Ollama`, which provides workarounds for Gemma 3's
+# lack of native function calling on Ollama. For Gemma 4+ on Ollama,
+# use `LiteLlm` directly.
+
 import logging
 import random
```

Lines changed: 58 additions & 0 deletions
# Hello World — Gemma 4

This sample demonstrates using **Gemma 4** with ADK via the standard `Gemini`
class. Gemma 4 supports native function calling and system instructions, so no
special workaround classes are needed.

### Gemma 4 (this sample)

```python
from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini

root_agent = Agent(
    model=Gemini(model="gemma-4-31b-it"),  # gemma-4-26b-a4b-it or gemma-4-31b-it
    ...
)
```

### Gemma 3

```python
from google.adk.agents.llm_agent import Agent
from google.adk.models.gemma_llm import Gemma

root_agent = Agent(
    model=Gemma(model="gemma-3-27b-it"),
    ...
)
```

See the [`hello_world_gemma/`](../hello_world_gemma/) sample for the full
Gemma 3 example.

## Why separate classes?

The `Gemma` and `Gemma3Ollama` classes exist because Gemma 3 lacks native
function calling and system instruction support. They provide workarounds by:

- Injecting tool declarations into text prompts
- Parsing function calls from model text responses
- Converting system instructions to user-role messages

Gemma 4 doesn't need any of this — it works natively with the standard
`Gemini` class (via Gemini API) and `LiteLlm` class (via other providers like
Ollama).

## Running this sample

```bash
# From the repository root
adk run contributing/samples/hello_world_gemma4
```

## Related samples

- [`hello_world_gemma/`](../hello_world_gemma/) — Gemma 3 via Gemini API
- [`hello_world_gemma3_ollama/`](../hello_world_gemma3_ollama/) — Gemma 3 via Ollama
Lines changed: 16 additions & 0 deletions
```python
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from . import agent
```
Lines changed: 103 additions & 0 deletions
```python
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Gemma 4 sample — uses the standard `Gemini` class directly.
# Gemma 4 supports native function calling and system instructions,
# so no workaround classes are needed.
# Compare with the hello_world_gemma/ sample (Gemma 3, requires workarounds).

import random

from google.adk.agents.llm_agent import Agent
from google.adk.models.google_llm import Gemini


def roll_die(sides: int) -> int:
  """Roll a die and return the rolled result.

  Args:
    sides: The integer number of sides the die has.

  Returns:
    An integer of the result of rolling the die.
  """
  return random.randint(1, sides)


async def check_prime(nums: list[int]) -> str:
  """Check if a given list of numbers are prime.

  Args:
    nums: The list of numbers to check.

  Returns:
    A str indicating which number is prime.
  """
  primes = set()
  for number in nums:
    number = int(number)
    if number <= 1:
      continue
    is_prime = True
    for i in range(2, int(number**0.5) + 1):
      if number % i == 0:
        is_prime = False
        break
    if is_prime:
      primes.add(number)
  return (
      "No prime numbers found."
      if not primes
      else f"{', '.join(str(num) for num in primes)} are prime numbers."
  )


root_agent = Agent(
    model=Gemini(model="gemma-4-31b-it"),
    name="data_processing_agent",
    description=(
        "Hello world agent using Gemma 4 via the standard Gemini class."
    ),
    instruction="""\
      You roll dice and answer questions about the outcome of the dice rolls.
      You can roll dice of different sizes.
      You can use multiple tools in parallel by calling functions in parallel
      (in one request and in one round).
      It is ok to discuss previous dice rolls, and comment on the dice rolls.
      When you are asked to roll a die, you must call the roll_die tool with
      the number of sides. Be sure to pass in an integer. Do not pass in a
      string.
      You should never roll a die on your own.
      When checking prime numbers, call the check_prime tool with a list of
      integers. Be sure to pass in a list of integers. You should never pass
      in a string.
      You should not check prime numbers before calling the tool.
      When you are asked to roll a die and check prime numbers, you should
      always make the following two function calls:
      1. You should first call the roll_die tool to get a roll. Wait for the
         function response before calling the check_prime tool.
      2. After you get the function response from roll_die tool, you should
         call the check_prime tool with the roll_die result.
         2.1 If user asks you to check primes based on previous rolls, make
             sure you include the previous rolls in the list.
      3. When you respond, you must include the roll_die result from step 1.
      You should always perform the previous 3 steps when asking for a roll
      and checking prime numbers.
      You should not rely on the previous history on prime results.
""",
    tools=[
        roll_die,
        check_prime,
    ],
)
```

src/google/adk/models/__init__.py

Lines changed: 4 additions & 1 deletion
```diff
@@ -31,6 +31,8 @@
 
 
 LLMRegistry.register(Gemini)
+# Gemma 3 integration (provides function calling workarounds).
+# For Gemma 4+, use Gemini or LiteLlm directly.
 LLMRegistry.register(Gemma)
 LLMRegistry.register(ApigeeLlm)
 
@@ -54,7 +56,8 @@
   # LiteLLM support requires: pip install google-adk[extensions]
   pass
 
-# Optionally register Gemma3Ollama if litellm package is installed
+# Gemma 3 on Ollama (provides function calling workarounds).
+# For Gemma 4+ on Ollama, use LiteLlm directly.
 try:
   from .gemma_llm import Gemma3Ollama
```
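The registration comments above split responsibility by Gemma generation: the Gemma 3 workaround classes get registered alongside `Gemini`, while `gemma-4*` strings are meant to resolve to `Gemini` itself. A toy, stdlib-only sketch of regex-based model resolution shows the idea; the classes and registry below are stand-ins, not the ADK `LLMRegistry` API:

```python
# Hypothetical sketch of resolving model-name strings to LLM classes via
# regex patterns, loosely mirroring the registry behavior this commit tests.
import re


class FakeGemini:
  # Gemma 4 handled natively, so it shares the Gemini integration.
  patterns = [r"gemini-.*", r"gemma-4.*"]


class FakeGemma:
  # Gemma 3 needs the workaround class.
  patterns = [r"gemma-3.*"]


REGISTRY: list[tuple[str, type]] = []


def register(llm_cls: type) -> None:
  for pattern in llm_cls.patterns:
    REGISTRY.append((pattern, llm_cls))


def resolve(model: str) -> type:
  for pattern, llm_cls in REGISTRY:
    if re.fullmatch(pattern, model):
      return llm_cls
  raise ValueError(f"No registered LLM class matches {model!r}")


register(FakeGemini)
register(FakeGemma)
print(resolve("gemma-4-31b-it").__name__)  # FakeGemini
print(resolve("gemma-3-27b-it").__name__)  # FakeGemma
```

The point of the non-collision test later in this commit is exactly this property: a `gemma-4-*` string must never match a pattern that routes it to the Gemma 3 workaround class.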

src/google/adk/models/gemma_llm.py

Lines changed: 37 additions & 28 deletions
```diff
@@ -39,13 +39,16 @@
 
 
 class GemmaFunctionCallingMixin:
-  """Mixin providing function calling support for Gemma models.
+  """Mixin providing function calling support for Gemma 3 models.
 
-  Gemma models don't have native function calling support, so this mixin
+  Gemma 3 models don't have native function calling support, so this mixin
   provides the logic to:
   1. Convert function declarations to system instruction prompts
   2. Convert function call/response parts to text in the conversation
   3. Extract function calls from model text responses
+
+  This mixin is NOT needed for Gemma 4+, which supports function calling
+  natively through the standard Gemini/LiteLLM integrations.
   """
 
   def _move_function_calls_into_system_instruction(
@@ -161,31 +164,29 @@ class GemmaFunctionCallModel(BaseModel):
 
 
 class Gemma(GemmaFunctionCallingMixin, Gemini):
-  """Integration for Gemma models exposed via the Gemini API.
+  """Integration for Gemma 3 models exposed via the Gemini API.
+
+  This class is for **Gemma 3 only**. It provides workarounds for Gemma 3's
+  lack of native function calling and system instruction support:
+  - Tools are injected into text prompts (not passed via the API)
+  - Function calls are parsed from model text responses
+  - System instructions are converted to user-role messages
+
+  For Gemma 4 and later, use the standard ``Gemini`` class directly::
+
+      # Gemma 4 — use Gemini (native function calling & system instructions)
+      agent = Agent(model=Gemini(model="gemma-4-<size>"), ...)
+
+      # Gemma 3 — use this class (workarounds applied automatically)
+      agent = Agent(model=Gemma(model="gemma-3-27b-it"), ...)
 
-  Only Gemma 3 models are supported at this time. For agentic use cases,
-  use of gemma-3-27b-it and gemma-3-12b-it are strongly recommended.
+  For agentic use cases with Gemma 3, ``gemma-3-27b-it`` and ``gemma-3-12b-it``
+  are strongly recommended.
 
   For full documentation, see: https://ai.google.dev/gemma/docs/core/
 
-  NOTE: Gemma does **NOT** support system instructions. Any system instructions
-  will be replaced with an initial *user* prompt in the LLM request. If system
-  instructions change over the course of agent execution, the initial content
-  **SHOULD** be replaced. Special care is warranted here.
-  See:
-  https://ai.google.dev/gemma/docs/core/prompt-structure#system-instructions
-
-  NOTE: Gemma's function calling support is limited. It does not have full
-  access to the same built-in tools as Gemini. It also does not have special
-  API support for tools and functions. Rather, tools must be passed in via a
-  `user` prompt, and extracted from model responses based on approximate shape.
-
-  NOTE: Vertex AI API support for Gemma is not currently included. This **ONLY**
-  supports usage via the Gemini API.
+  NOTE: This class only supports the Gemini API (Google AI Studio).
+  Vertex AI API support is not included.
   """
 
   model: str = (
@@ -365,12 +366,20 @@ def _get_last_valid_json_substring(text: str) -> tuple[bool, str | None]:
 class Gemma3Ollama(GemmaFunctionCallingMixin, LiteLlm):
   """Integration for Gemma 3 models running locally via Ollama.
 
-  This enables fully local agent workflows using Gemma 3 models.
-  Requires Ollama to be running with a Gemma 3 model pulled.
+  This class is for **Gemma 3 only**. It provides the same function calling
+  workarounds as the ``Gemma`` class, but routes through Ollama via LiteLLM.
+
+  For Gemma 4 and later on Ollama, use the standard ``LiteLlm`` class::
+
+      # Gemma 4 on Ollama — use LiteLlm directly
+      agent = Agent(model=LiteLlm(model="ollama_chat/gemma4:<size>"), ...)
+
+      # Gemma 3 on Ollama — use this class
+      agent = Agent(model=Gemma3Ollama(), ...)
+
+  Requires Ollama to be running with a Gemma 3 model pulled::
 
-  Example:
-      ollama pull gemma3:12b
-      model = Gemma3Ollama(model="ollama/gemma3:12b")
+      ollama pull gemma3:12b
   """
 
   def __init__(self, model: str = 'ollama/gemma3:12b', **kwargs):
```
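The third workaround described in the docstrings above, converting system instructions to user-role messages, can be sketched in a few lines. This is an illustrative stand-in with a plain dict message shape, not ADK's request types:

```python
# Hypothetical sketch: fold a system instruction into the conversation as an
# initial user-role turn, since Gemma 3 has no system role. Message shape is
# illustrative only.
def fold_system_instruction(
    system_instruction: str, messages: list[dict]
) -> list[dict]:
  """Return messages with the system instruction prepended as a user turn."""
  if not system_instruction:
    return list(messages)
  preamble = {"role": "user", "content": system_instruction}
  return [preamble, *messages]


msgs = fold_system_instruction(
    "You roll dice and report results.",
    [{"role": "user", "content": "Roll a d20."}],
)
print([m["role"] for m in msgs])  # ['user', 'user']
```

Note the caveat the old docstring spelled out: if system instructions change mid-run, the prepended turn must be replaced rather than appended, or stale instructions linger in the history.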

src/google/adk/models/google_llm.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -142,6 +142,8 @@ def supported_models(cls) -> list[str]:
 
     return [
         r'gemini-.*',
+        # Gemma 4+ works natively with Gemini (no workarounds needed).
+        r'gemma-4.*',
         # model optimizer pattern
         r'model-optimizer-.*',
         # fine-tuned vertex endpoint pattern
```

tests/unittests/models/test_gemma_llm.py

Lines changed: 11 additions & 0 deletions
```diff
@@ -507,6 +507,17 @@ def test_process_response_last_json_object():
   assert part.text is None
 
 
+# Tests for Gemma 4 registry routing
+def test_gemma4_resolves_to_gemini_not_gemma():
+  """Gemma 4 models should use the standard Gemini class, not the Gemma
+  workaround class."""
+  from google.adk.models.google_llm import Gemini
+
+  resolved = models.LLMRegistry.resolve("gemma-4-31b-it")
+  assert resolved is not Gemma
+  assert resolved is Gemini
+
+
 # Tests for Gemma3Ollama (only run when LiteLLM is installed)
 try:
   from google.adk.models.gemma_llm import Gemma3Ollama
```
