
DeepInverse Benchmarks

This repository contains a set of benchmarks for evaluating the performance of different image reconstruction methods implemented in the DeepInverse library.

Leaderboards are automatically generated and can be found in the DeepInverse benchmarks documentation.

Benchmark results are stored in a Hugging Face dataset repository: https://huggingface.co/datasets/deepinv/benchmarks/tree/main
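If you want to inspect the raw results locally, one option is the standard Hugging Face Hub client. This is a minimal sketch assuming nothing beyond the public Hub API; the file layout inside the dataset repository is not documented here.

from huggingface_hub import snapshot_download

# Download a local copy of the benchmark results dataset.
local_path = snapshot_download(repo_id="deepinv/benchmarks", repo_type="dataset")
print(local_path)  # directory containing the raw result files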

Evaluating Your Reconstruction Methods

To evaluate your own reconstruction methods on these benchmarks, install deepinv_bench:

pip install git+https://github.com/deepinv/benchmarks.git#egg=deepinv_bench
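You can verify the installation by checking that the package imports (run_benchmark is the entry point used below):

python -c "from deepinv_bench import run_benchmark"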

If you have already installed deepinv_bench, you can update it with:

pip install --upgrade --force-reinstall --no-deps git+https://github.com/deepinv/benchmarks.git#egg=deepinv_bench

and then run the benchmark in Python:

from deepinv_bench import run_benchmark
import deepinv as dinv

# Replace RAM with your own reconstruction method.
my_solver = dinv.models.RAM()
results = run_benchmark(my_solver, "benchmark_name")

where benchmark_name is the name of the benchmark and my_solver is your reconstruction method: a callable that receives the measurements y and the forward operator physics and returns the reconstructed image.
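For example, a minimal custom solver is any callable with that signature. The sketch below (MySolver is an illustrative name, and it assumes a linear forward operator) back-projects the measurements with the adjoint of the physics as a trivial baseline:

from deepinv_bench import run_benchmark
import torch

class MySolver(torch.nn.Module):
    # Any torch module (or callable) taking (y, physics) can be benchmarked.
    def forward(self, y, physics):
        # Trivial baseline: back-project the measurements with the adjoint
        # of the forward operator (available on deepinv linear physics).
        return physics.A_adjoint(y)

results = run_benchmark(MySolver(), "benchmark_name")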

Adding New Solvers

To add a new solver to an existing benchmark, open a new pull request on this repository, adding a new your_solver_name.py file in the corresponding benchmark folder. Follow the structure of the existing solver files. The new solver will be automatically run once the pull request is merged, and the results will be added to the leaderboard.
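As a purely hypothetical sketch of what such a file might contain (the get_solver name and interface are illustrative, not a confirmed convention; copy an existing solver file in the benchmark folder for the real structure):

# your_solver_name.py -- hypothetical layout only.
import deepinv as dinv

def get_solver():
    # Return the reconstruction method to be benchmarked; it must
    # accept (y, physics) as described above.
    return dinv.models.RAM()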

Adding New Benchmarks

To create a new benchmark, open a pull request that adds a new folder following the structure of the existing benchmark_template folder.

A new benchmark requires a dataset, an evaluation metric, and a forward operator.

If you would like to propose a new dataset, metric, or forward operator, please open an issue.
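As a rough, hypothetical illustration of the expected layout (benchmark_template is the authoritative reference; the file names here are made up):

your_benchmark_name/
    benchmark definition files  (dataset, metric, forward operator; see benchmark_template)
    your_solver_name.py         (one file per solver entry)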
