
Conversation

@tachella (Contributor) commented Jan 16, 2026

This PR adds a run_benchmark function to this repo so that users can quickly evaluate their model on a given benchmark:

from deepinv_bench import run_benchmark
my_solver = lambda y, physics: ...  # your solver here
results = run_benchmark(my_solver, "benchmark_name")
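For example, a trivial baseline under the same signature could be run as follows (a sketch: it assumes the benchmark's physics is a deepinv linear operator exposing A_dagger, and "benchmark_name" stands in for a real benchmark id):

from deepinv_bench import run_benchmark

def pinv_solver(y, physics):
    # Trivial baseline: reconstruct with the pseudo-inverse of the
    # forward operator (available on deepinv's linear physics objects).
    return physics.A_dagger(y)

results = run_benchmark(pinv_solver, "benchmark_name")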

@tomMoral, is there an easy way to do this with benchopt? See the TODO in the PR :)

@tachella (Contributor, Author) left a comment

Many thanks for the proposed solution!

I'm getting an error when running locally; I was wondering if you get the same?

Inline comment on this code:

from deepinv.models import Reconstructor


class DummyModel(Reconstructor):
@tachella (Contributor, Author):

why do we need this?
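(DummyModel presumably serves as a stand-in reconstructor for testing. A minimal sketch of such a class, assuming deepinv's usual forward(y, physics) convention for reconstructors and a linear physics exposing A_adjoint:)

from deepinv.models import Reconstructor

class DummyModel(Reconstructor):
    # Placeholder reconstruction: just apply the adjoint of the
    # forward operator to the measurements.
    def forward(self, y, physics, **kwargs):
        return physics.A_adjoint(y)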

Inline comment on this code:

return results


if __name__ == "__main__":
@tachella (Contributor, Author):

I'm getting an error when running locally; do you get the same?

ValueError: Objective.evaluate_result() should contain a key named 'value' to be used with this stopping_criterion. The name of this key can be changed via the 'key_to_monitor' parameter. Available keys are ['PSNR', 'PSNR_std', 'NIQE', 'NIQE_std', 'runtime']
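Going by the error message's own hint, one possible fix is to point the stopping criterion at one of the reported metrics via key_to_monitor (a sketch; the solver class name and patience value are illustrative, not part of this PR):

from benchopt import BaseSolver
from benchopt.stopping_criterion import SufficientProgressCriterion

class MySolver(BaseSolver):  # illustrative solver class
    name = "my-solver"
    # Monitor 'PSNR', one of the keys the Objective actually reports,
    # instead of the default 'value' key.
    stopping_criterion = SufficientProgressCriterion(
        key_to_monitor="PSNR", patience=5
    )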
