🔴 Required Information
Describe the Bug:
When using runner.run_live() with the gemini-3.1-flash-live-preview model on AI Studio, the session immediately fails with APIError: 1007 None. Request contains an invalid argument.
The root cause is that the ADK sends user content via send_client_content (in base_llm_flow.py line ~670), but Gemini 3.1 Flash Live only allows send_client_content for seeding initial context history (requiring initial_history_in_client_content in history_config). Text updates during the conversation must be sent via send_realtime_input instead.
The raw google-genai SDK connects and works fine with this model — the issue is specific to how the ADK orchestrates the live session.
Steps to Reproduce:
- Set GOOGLE_GENAI_USE_VERTEXAI=false and GOOGLE_API_KEY=<your-key> in the environment
- Run the minimal reproduction code below
- Observe APIError: 1007 None. Request contains an invalid argument.
Expected Behavior:
The ADK should successfully establish a live session with gemini-3.1-flash-live-preview and handle user content via send_realtime_input (as required by the Gemini 3.1 Live API migration guide).
Observed Behavior:
The websocket connection is established, but the first send_client_content call from the ADK triggers an immediate disconnection with error code 1007 ("Request contains an invalid argument").
google.genai.errors.APIError: 1007 None. Request contains an invalid argument.
Full traceback:
File "google/adk/flows/llm_flows/base_llm_flow.py", line 526, in run_live
File "google/adk/flows/llm_flows/base_llm_flow.py", line 698, in _receive_from_model
File "google/adk/models/gemini_llm_connection.py", line 172, in receive
File "google/genai/live.py", line 454, in receive
File "google/genai/live.py", line 545, in _receive
google.genai.errors.APIError: 1007 None. Request contains an invalid argument.
Environment Details:
- ADK Library Version: 1.27.2 (also tested with 1.28.0 — same issue)
- Desktop OS: Linux (Ubuntu)
- Python Version: 3.12.11
Model Information:
- Are you using LiteLLM: No
- Which model is being used: gemini-3.1-flash-live-preview
🟡 Optional Information
Regression:
This is not a regression — gemini-3.1-flash-live-preview is a new model with different API behavior from gemini-live-2.5-flash. The ADK works correctly with gemini-live-2.5-flash.
Logs:
The ADK's google_llm.py connect() method builds the LiveConnectConfig with these fields (intercepted via monkey-patch):
Config keys: ['response_modalities', 'system_instruction', 'tools', 'input_audio_transcription', 'output_audio_transcription']
response_modalities: ['AUDIO']
system_instruction: {'parts': [{'text': '...'}], 'role': 'system'}
tools: [{'function_declarations': [{'description': '...', 'name': 'get_time'}]}]
input_audio_transcription: {}
output_audio_transcription: {}
The setup/connect itself succeeds — the error occurs when the ADK subsequently calls send_client_content to deliver user content.
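For reference, the monkey-patch interception boils down to wrapping the connect entry point so every call records its keyword arguments. A generic, self-contained sketch of the technique (the DummyLive class is an illustrative stand-in for the SDK's live connect method, not ADK code):

```python
import functools

def capture_kwargs(fn, log):
    """Wrap fn so each call appends a snapshot of its keyword arguments to log."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.append(dict(kwargs))
        return fn(*args, **kwargs)
    return wrapper

class DummyLive:
    """Illustrative stand-in for the SDK's live connect entry point."""
    def connect(self, *, model=None, config=None):
        return (model, config)

log = []
live = DummyLive()
live.connect = capture_kwargs(live.connect, log)  # the monkey-patch
live.connect(model="gemini-3.1-flash-live-preview",
             config={"response_modalities": ["AUDIO"]})
print(sorted(log[0]["config"].keys()))  # ['response_modalities']
```

The same wrapper applied to the real client's connect method is what produced the config dump above.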
Additional Context:
There are two incompatibilities in the ADK when used with gemini-3.1-flash-live-preview. Both stem from breaking changes in the Gemini 3.1 Live API documented in the migration guide and the Gemini Live API skill reference:
Issue 1: send_client_content used for user messages during conversation
Per the Gemini 3.1 docs:
Important: Use send_realtime_input / sendRealtimeInput for all real-time user input (audio, video, and text). send_client_content / sendClientContent is only supported for seeding initial context history (requires setting initial_history_in_client_content in history_config). Do not use it to send new user messages during the conversation.
The ADK currently uses send_client_content (LiveClientContent) for all user content in _send_to_model() (base_llm_flow.py line ~670):
# base_llm_flow.py line ~670
await llm_connection.send_content(live_request.content)
Which calls gemini_llm_connection.py lines 115-120:
await self._gemini_session.send(
    input=types.LiveClientContent(
        turns=[content],
        turn_complete=True,
    )
)
For Gemini 3.1 models, text content during the conversation needs to be sent via send_realtime_input(text=...) instead.
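One possible shape for a fix is to branch on the model family before choosing the send path. A minimal sketch with a hypothetical dispatcher and a fake session standing in for the SDK's live session (neither is ADK API; the prefix check is an assumption about how model detection might work):

```python
import asyncio

def uses_realtime_text(model: str) -> bool:
    """Gemini 3.1+ live models only accept mid-conversation text via send_realtime_input."""
    return model.startswith("gemini-3")

async def send_user_text(session, model: str, text: str) -> None:
    """Hypothetical dispatcher: pick the send path the model family accepts."""
    if uses_realtime_text(model):
        await session.send_realtime_input(text=text)
    else:
        await session.send_client_content(
            turns=[{"role": "user", "parts": [{"text": text}]}],
            turn_complete=True,
        )

class FakeSession:
    """Stand-in for the SDK's live session, recording which method was called."""
    def __init__(self):
        self.calls = []
    async def send_realtime_input(self, **kwargs):
        self.calls.append(("send_realtime_input", kwargs))
    async def send_client_content(self, **kwargs):
        self.calls.append(("send_client_content", kwargs))

async def demo():
    fake = FakeSession()
    await send_user_text(fake, "gemini-3.1-flash-live-preview", "Hello")
    await send_user_text(fake, "gemini-live-2.5-flash", "Hello")
    return fake.calls

calls = asyncio.run(demo())
print([name for name, _ in calls])  # ['send_realtime_input', 'send_client_content']
```

The key point is that the routing decision has to happen inside the ADK's connection layer, since callers of run_live() have no way to choose the transport themselves.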
Issue 2: media= parameter used for audio in send_realtime_input
Per the Gemini 3.1 docs:
Warning: Do not use media in sendRealtimeInput. Use the specific keys: audio for audio data, video for images/video frames, and text for text input.
The ADK currently sends audio blobs using the deprecated media= parameter in gemini_llm_connection.py line 131:
# gemini_llm_connection.py line 131
await self._gemini_session.send_realtime_input(media=input)
For Gemini 3.1 models, this needs to be:
await self._gemini_session.send_realtime_input(audio=input)
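A small pure-Python sketch of how the kwarg choice could be made model-aware (the helper name and prefix check are hypothetical; keeping media= for older models mirrors what the ADK sends today, though newer keys may also work there):

```python
def realtime_input_kwargs(blob, model: str) -> dict:
    """Pick the keyword send_realtime_input expects for an audio blob.

    Gemini 3.1 live models reject the legacy media= key and require the
    type-specific audio=/video=/text= keys instead.
    """
    if model.startswith("gemini-3"):
        return {"audio": blob}  # specific key required by Gemini 3.1
    return {"media": blob}      # legacy key used by current ADK code

blob = {"mime_type": "audio/pcm;rate=16000", "data": b"\x00\x00"}
print(sorted(realtime_input_kwargs(blob, "gemini-3.1-flash-live-preview")))  # ['audio']
```

The result would then be splatted into the call, e.g. session.send_realtime_input(**realtime_input_kwargs(blob, model)).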
Additional Notes:
- Gemini 3.1 also changes the native input audio format to 16kHz (audio/pcm;rate=16000) instead of 24kHz.
- Confirmed that the raw google-genai SDK works perfectly with this model — connecting with system_instruction, tools, speech_config, and AudioTranscriptionConfig all succeed when using client.aio.live.connect() directly.
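The 16kHz note has a practical consequence for anyone chunking microphone input: assuming 16-bit mono PCM (an assumption about the capture format, not stated in the docs quoted above), the byte rate drops by a third relative to 24kHz:

```python
def pcm_chunk_bytes(rate_hz: int, chunk_ms: int, sample_width: int = 2, channels: int = 1) -> int:
    """Size in bytes of one raw-PCM chunk lasting chunk_ms milliseconds."""
    return rate_hz * sample_width * channels * chunk_ms // 1000

print(pcm_chunk_bytes(16000, 20))  # 640 bytes per 20 ms at the Gemini 3.1 input rate
print(pcm_chunk_bytes(24000, 20))  # 960 bytes per 20 ms at the older 24 kHz rate
```

Code that sizes its audio buffers around the old 24kHz rate would need the same adjustment when migrating.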
Minimal Reproduction Code:
import asyncio
import os

from dotenv import load_dotenv

load_dotenv()

from google.genai import types
from google.adk import agents, runners, sessions
from google.adk.agents import run_config as run_config_lib


async def main():
    agent = agents.Agent(
        name="TestAgent",
        description="Test agent",
        model="gemini-3.1-flash-live-preview",
        instruction="You are a helpful assistant.",
        tools=[],
    )
    session_service = sessions.InMemorySessionService()
    runner = runners.Runner(
        app_name="test", agent=agent, session_service=session_service
    )
    sess = await session_service.create_session(
        session_id="test-1", app_name="test", user_id="u1"
    )
    queue = agents.LiveRequestQueue()
    config = run_config_lib.RunConfig(
        streaming_mode=run_config_lib.StreamingMode.BIDI,
        response_modalities=[types.Modality.AUDIO],
    )
    live_events = runner.run_live(
        session=sess, live_request_queue=queue, run_config=config
    )
    queue.send_content(
        types.Content(role="user", parts=[types.Part(text="Hello")])
    )
    async for event in live_events:
        print(f"Event: {event}")
        break
    queue.close()


# Requires: GOOGLE_GENAI_USE_VERTEXAI=false and GOOGLE_API_KEY set
asyncio.run(main())
How often has this issue occurred?:
Every time. The failure is deterministic: the first user message sent over the live session triggers it.