feat(grader): add metric_type parameter to ToolCallSequenceMatchGrader #64
Conversation
- Add metric_type parameter to control score calculation when strict_mode=False and use_jaccard_similarity=False
- Support 'recall' (default): matched_count / reference_count
- Support 'precision': matched_count / predicted_count
- Add validation for metric_type parameter
- Add corresponding test cases
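For context, a minimal sketch of how the new parameter might be used; the import path is an assumption, and only the constructor arguments shown in this PR's diff are taken from the change:

```python
# Hypothetical usage sketch -- the module path below is assumed, not confirmed by this PR.
from openjudge.graders import ToolCallSequenceMatchGrader

# metric_type only affects scoring when strict_mode and use_jaccard_similarity are both False.
grader = ToolCallSequenceMatchGrader(
    strict_mode=False,
    use_jaccard_similarity=False,
    metric_type="precision",  # new parameter; "recall" remains the default
)
```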
Code Review
This pull request effectively adds a metric_type parameter to the ToolCallSequenceMatchGrader, allowing users to choose between 'recall' and 'precision' for scoring in non-strict, non-Jaccard mode. The implementation is well-supported by validation and a comprehensive set of new tests. My review includes a couple of suggestions to enhance type safety and improve the readability of the new scoring logic. Overall, this is a solid contribution.
        self,
        strict_mode: bool = True,
        use_jaccard_similarity: bool = True,
        metric_type: str = "recall",
For improved type safety and code clarity, consider using typing.Literal for the metric_type parameter instead of str. This makes the allowed string values ('recall', 'precision') explicit for static analysis tools and developers reading the code.
You would need to add from typing import Literal to the imports at the top of the file and change the signature to:
        metric_type: Literal["recall", "precision"] = "recall",

The runtime validation on line 67 is still valuable and should be kept.
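For reference, a sketch of how the Literal annotation and the existing runtime check could fit together; the constructor body is abridged here and the exact validation message is an assumption:

```python
from typing import Literal


class ToolCallSequenceMatchGrader:
    # Abridged sketch: the real class has additional parameters and base classes.
    def __init__(
        self,
        strict_mode: bool = True,
        use_jaccard_similarity: bool = True,
        metric_type: Literal["recall", "precision"] = "recall",
    ) -> None:
        # Keep the runtime validation alongside the static hint (message is illustrative).
        if metric_type not in ("recall", "precision"):
            raise ValueError(f"metric_type must be 'recall' or 'precision', got {metric_type!r}")
        self.strict_mode = strict_mode
        self.use_jaccard_similarity = use_jaccard_similarity
        self.metric_type = metric_type
```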
    matched_count = len(gt_tool_names) - len(missing)
    if self.metric_type == "recall":
        # Recall: matched / reference
        if len(gt_tool_names) > 0:
            step_score = matched_count / len(gt_tool_names)
        else:
            step_score = 1.0
    else:  # precision
        # Precision: matched / predicted
        if len(pred_tool_names) > 0:
            step_score = matched_count / len(pred_tool_names)
        else:
            step_score = 0.0 if len(gt_tool_names) > 0 else 1.0
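To make the two metrics concrete, a small standalone illustration of the arithmetic; the tool names and counts below are invented for the example:

```python
# Illustrative values only -- not taken from the grader's test suite.
gt_tool_names = ["search", "calculator"]      # reference tool calls
pred_tool_names = ["search", "web_browser"]   # predicted tool calls
missing = ["calculator"]                      # reference tools not found in the prediction

matched_count = len(gt_tool_names) - len(missing)    # 2 - 1 = 1
recall = matched_count / len(gt_tool_names)          # 1 / 2 = 0.5
precision = matched_count / len(pred_tool_names)     # 1 / 2 = 0.5
```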
The logic for calculating the step_score is correct, but the nested if/else statements can be simplified to improve readability. By defining the denominator based on the metric type and then handling the zero-denominator edge cases separately, the code becomes flatter and easier to follow.
Suggested change (replacing the block quoted above):

    matched_count = len(gt_tool_names) - len(missing)
    if self.metric_type == "recall":
        denominator = len(gt_tool_names)
        if denominator == 0:
            step_score = 1.0  # Perfect recall if no reference tools are expected
        else:
            step_score = matched_count / denominator
    else:  # precision
        denominator = len(pred_tool_names)
        if denominator == 0:
            # If no tools predicted, score is 1.0 only if no tools were expected
            step_score = 1.0 if not gt_tool_names else 0.0
        else:
            step_score = matched_count / denominator
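A small self-contained check of the edge cases the suggested version handles explicitly; the helper function and inputs below are for illustration only, not part of the grader:

```python
def step_score_for(metric_type, gt_tool_names, pred_tool_names, missing):
    """Illustrative restatement of the suggested scoring logic."""
    matched_count = len(gt_tool_names) - len(missing)
    if metric_type == "recall":
        denominator = len(gt_tool_names)
        return 1.0 if denominator == 0 else matched_count / denominator
    denominator = len(pred_tool_names)
    if denominator == 0:
        return 1.0 if not gt_tool_names else 0.0
    return matched_count / denominator


assert step_score_for("recall", [], [], []) == 1.0           # no reference tools expected
assert step_score_for("precision", ["a"], [], ["a"]) == 0.0  # tools expected, none predicted
assert step_score_for("precision", [], [], []) == 1.0        # nothing expected, nothing predicted
```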
Since the grader already returns metadata, should we also consider exposing this information, such as matched_count and denominator?
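If the maintainers want to surface this, one possible shape for the extra fields; the key names and the surrounding metadata structure here are assumptions, not the grader's actual API:

```python
# Hypothetical metadata entry with illustrative values; the real structure may differ.
metric_type, matched_count, denominator = "recall", 1, 2
step_metadata = {
    "metric_type": metric_type,
    "matched_count": matched_count,
    "denominator": denominator,
    "step_score": matched_count / denominator if denominator else None,
}
```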
OpenJudge Version
[The version of OpenJudge you are working on, e.g. import openjudge; print(openjudge.__version__)]

Description
[Please describe the background, purpose, changes made, and how to test this PR]

Checklist
Please check the following items before code is ready to be reviewed.
- pre-commit run --all-files command