**Please make sure you read the contribution guide and file the issues in the right place.**
Contribution guide.
**Is your feature request related to a problem? Please describe.**
I am trying to implement regression tests using pytest by loading evalset.json and a rubric, following the instructions in the document below:
10. CI/CD with Pytest (pytest)
However, the agent I am developing uses tools that call APIs returning real-time data from an external system. Because the tool outputs are dynamic, there is a risk that the evaluation results (pass/fail) may change depending on when the test is executed.
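For context, the regression test itself currently looks roughly like the sketch below. Module and file names are placeholders from my project, and I am using the AgentEvaluator entry point described in that document (the exact signature may differ between ADK versions):

```python
# tests/test_eval.py -- simplified; paths and module names are placeholders
import pytest
from google.adk.evaluation.agent_evaluator import AgentEvaluator


@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_my_api_agent_eval():
    """Run the evalset against the agent and check it against the rubric."""
    await AgentEvaluator.evaluate(
        agent_module="my_api_agent",
        eval_dataset_file_path_or_dir="tests/fixture/my_api_agent/evalset.json",
    )
```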
To address this, following the traditional unit testing approach, I would like to mock the tool implementations during eval execution so that they return fixed JSON responses for testing purposes.
Is there currently any built-in support in ADK for mocking tools in this way?
**Describe the solution you'd like**
When the evaluation runs, each tool could be supplied with the JSON it is expected to return, and the tool would be mocked so that it returns that JSON as a fixed value instead of calling the external API.
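For illustration only, the kind of interface I have in mind would look something like the sketch below. The `tool_mocks` parameter is hypothetical and does not exist in ADK today; it is only meant to show the shape of the feature:

```python
# Hypothetical API -- not an existing ADK feature, shown only to illustrate the request.
await AgentEvaluator.evaluate(
    agent_module="my_api_agent",
    eval_dataset_file_path_or_dir="tests/fixture/my_api_agent/evalset.json",
    # Hypothetical parameter: map each tool name to the fixed JSON it should
    # return whenever the agent calls it during this evaluation run.
    tool_mocks={
        "call_api_tool": {"status": "ok", "items": [{"id": 1, "price": 100}]},
    },
)
```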
**Describe alternatives you've considered**
I have already implemented a workaround where the tool module checks an environment variable and exports either the real tool or a mock accordingly.
However, this approach is quite custom, and it requires modifying production code solely for the purpose of mocking, which I would prefer to avoid.
Below is a simplified example of the current implementation.
call_api_tool.py

```python
import os

# CallApiTool and MockCallApiTool both subclass BaseTool (definitions omitted).
# The exported tool is switched via an environment variable.
if os.getenv("IS_EVAL") == "1":
    call_api_tool = MockCallApiTool()
else:
    call_api_tool = CallApiTool()
```
agent.py

```python
from google.adk.agents import Agent

# Import path is illustrative; call_api_tool is the module shown above.
from .call_api_tool import call_api_tool

root_agent = Agent(
    name="my_api_agent",
    description="Agent that calls an external API",
    tools=[call_api_tool],
)
```
Ideally, I would like to mock or override tool implementations during eval execution without introducing environment-based branching logic into the agent or tool source code.
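The closest I have come to that is patching the tool from the test itself, for example with pytest's monkeypatch fixture, so that no eval-specific branching leaks into production code. A minimal sketch is below; it assumes CallApiTool subclasses BaseTool and that run_async is the method ADK invokes, which may not hold for every tool type:

```python
# tests/test_eval_with_mock.py -- workaround sketch, not an existing ADK feature
import pytest
from google.adk.evaluation.agent_evaluator import AgentEvaluator

from my_api_agent.call_api_tool import CallApiTool  # import path is a placeholder

# Fixed JSON the mocked tool should return during evaluation.
FIXED_RESPONSE = {"status": "ok", "items": [{"id": 1, "price": 100}]}


@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_my_api_agent_eval_with_mocked_tool(monkeypatch):
    """Patch the tool at test time so the eval sees deterministic output."""

    async def fake_run_async(self, *, args, tool_context):
        # Assumes run_async is the BaseTool entry point; return canned JSON
        # instead of calling the external API.
        return FIXED_RESPONSE

    monkeypatch.setattr(CallApiTool, "run_async", fake_run_async)

    await AgentEvaluator.evaluate(
        agent_module="my_api_agent",
        eval_dataset_file_path_or_dir="tests/fixture/my_api_agent/evalset.json",
    )
```

This keeps the mock in the test layer, but the test still has to know the tool's internals, which is why first-class support for supplying fixed tool outputs per eval case would be preferable.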
**Additional context**
None