How can I change the configured model for an LlmAgent at runtime? #4190
Replies: 2 comments 1 reply
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: The model for an `LlmAgent` is fixed when the agent is constructed; the most effective way to achieve your goal is a multi-agent approach.

Hi there! That's an interesting use case. The most effective way to achieve your goal is to adopt a multi-agent approach: create two distinct `LlmAgent` instances, one configured for text and one for live audio, and select between them at runtime.
Here is a conceptual example of how you might set this up:

```python
from google.adk.agents import LlmAgent
from google.adk.models import Gemini

# Agent for text-based interactions
text_agent = LlmAgent(
    name="text_agent",
    model=Gemini(model="gemini-2.5-flash"),
    instruction="You are a helpful text-based assistant.",
)

# Agent for live audio interactions
audio_agent = LlmAgent(
    name="audio_agent",
    model=Gemini(model="gemini-live-2.5-flash-native-audio"),
    instruction="You are a helpful voice assistant.",
)

# --- In your application logic ---
def process_request(is_live_audio: bool, user_input: str):
    if is_live_audio:
        # Use the runner with the agent configured for audio
        print("Using the live audio agent...")
        # runner.run(agent=audio_agent, ...)
    else:
        # Use the runner with the agent configured for text
        print("Using the text agent...")
        # runner.run(agent=text_agent, ...)
```

In this pattern, your application logic determines whether the context is live audio or text and selects the appropriate agent to process the request. This keeps the concerns of each modality separate and clean, and is conceptually similar to how different agents handle different tasks in a multi-agent system. [4]

Let me know if you have any other questions!
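Independent of ADK specifics, the selection step can be sketched as a small dispatch table. This is a minimal stand-alone sketch: the `Modality` enum and `AgentHandle` stand-in are illustrative and not part of the ADK API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    TEXT = auto()
    LIVE_AUDIO = auto()

@dataclass(frozen=True)
class AgentHandle:
    """Stand-in for a pre-built LlmAgent; records which model it was given."""
    name: str
    model: str

# One pre-built agent per modality, mirroring the two agents defined above.
AGENTS = {
    Modality.TEXT: AgentHandle("text_agent", "gemini-2.5-flash"),
    Modality.LIVE_AUDIO: AgentHandle("audio_agent", "gemini-live-2.5-flash-native-audio"),
}

def select_agent(is_live_audio: bool) -> AgentHandle:
    """Pick the agent whose model matches the request's modality."""
    return AGENTS[Modality.LIVE_AUDIO if is_live_audio else Modality.TEXT]
```

The key design point is that no model is ever swapped in place: each agent keeps the model it was constructed with, and the application only chooses which agent handles the request.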
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: The recommended approach is to create two separate `LlmAgent` instances and route between them with a coordinator agent.

Hi @tdakanalis,

That's a great question! While you can't directly change the model of an existing `LlmAgent` at runtime, you can define one agent per model and let a coordinator agent delegate each request to the appropriate one. Here's how you can achieve this:
Here is a conceptual example in Python:

```python
from google.adk.agents import LlmAgent

# 1. Define the specialized agents
text_agent = LlmAgent(
    name="TextAgent",
    model="gemini-2.5-flash",
    description="Handles text-based conversations.",
)

audio_agent = LlmAgent(
    name="AudioAgent",
    model="gemini-live-2.5-flash-native-audio",
    description="Handles live audio conversations.",
)

# 2. Define the coordinator agent
coordinator = LlmAgent(
    name="CoordinatorAgent",
    model="gemini-2.5-flash",  # A lightweight model is usually sufficient for routing
    instruction="""
    You are a coordinator agent. Your job is to route the user's request
    to the correct sub-agent. If the user is interacting via text,
    delegate to the TextAgent. If the user is interacting via audio,
    delegate to the AudioAgent.
    """,
    sub_agents=[text_agent, audio_agent],
)
```

This approach is flexible and aligns with the structured multi-agent patterns that ADK is designed to support. You can find more information in the ADK documentation on Multi-Agent Systems. [1]

Let me know if you have any other questions!
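To make the delegation decision concrete, here is a deterministic stand-in for the choice the coordinator's LLM makes. In ADK itself the model picks a sub-agent from the `sub_agents` list based on its instruction; the `delegate` function and its modality strings below are hypothetical, for illustration only.

```python
# Sub-agent names and their underlying models, mirroring the example above.
SUB_AGENTS = {
    "TextAgent": "gemini-2.5-flash",
    "AudioAgent": "gemini-live-2.5-flash-native-audio",
}

def delegate(modality: str) -> tuple[str, str]:
    """Map the session modality to (sub-agent name, underlying model).

    Deterministic stand-in for the coordinator's LLM-driven routing.
    """
    name = "AudioAgent" if modality == "audio" else "TextAgent"
    return name, SUB_AGENTS[name]
```

Either way, the routing decision lives outside the individual agents, so each `LlmAgent` keeps a single fixed model.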
Hi,
I have a use case where I want to use the same agent in both text and live audio modes. My challenge is that I want to use gemini-2.5-flash for text and gemini-live-2.5-flash-native-audio for live audio. How can I achieve this?