Conversation
@calcmogul I'd be curious to hear your input here as well!
Force-pushed from e7a7b5f to 9664cb3
src/EigenFormat.hpp (outdated)
```cpp
// with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

// From
// https://gitlab.com/libeigen/eigen/-/raw/d78a97506a159036b824a8a78cbf13be5716ecd2/unsupported/Eigen/Format
```
```cpp
CalibrationUncertaintyContext create_calibration_uncertainty_context(
    mrcal_problem_selections_t &problem_selections,
    mrcal_lensmodel_t &lensmodel, const std::span<double> intrinsics,
```
Read-write spans should use std::span<double>, and read-only spans should use std::span<const double>.
```cpp
    }
}

Jt_eigen.setFromTriplets(triplets.begin(), triplets.end());
```
If you can ensure the triplets are added in sorted column-major order (e.g., (0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1), etc.), you can use setFromSortedTriplets() instead, which should be faster.
```cpp
// Solve JtJ * A = dF_dbpacked^T using pre-computed factorization
Eigen::MatrixXd rhs = dF_dbpacked.transpose(); // (Nstate, 2)
Eigen::MatrixXd A = context.solver->solve(rhs); // (Nstate, 2)
```
Does assigning to a dense matrix here result in a lot of fill-in compared to the sparse return value, or is the solution essentially dense already?
I added some solver debug prints to a dataset with 30 pictures of a 9x7 calibration grid. Our dimensions are as follows:
```
matth@DESKTOP-PNNHDMI:~/mrcal-java$ ./cmake_build/bin/mrcal_jni_test | head
J is 1210x50 with 21610 non-zero
JtJ is 50x50 with 1412 non-zero
dq_db_sparse is 2x50 with 56 non-zero
rhs is 50x2 with 56 non zero
A is 50x2, with 100 non-zero
```
I haven't been able to find a case yet where A isn't fully dense. Sadly mrcal's mrcal_unpack_solver_state_vector needs a fully dense matrix passed to it as input, so I can't make dF_dbpacked sparse.
src/mrcal-uncertainty.cpp (outdated)
```cpp
// Solve JtJ * A = dF_dbpacked^T for ALL points at once
// rhs shape: (Nstate, 2*Npoints)
Eigen::MatrixXd rhs = dF_dbpacked.transpose();
Eigen::MatrixXd A = context.solver->solve(rhs); // (Nstate, 2*Npoints)
```
This batching approach seems, on average, the same speed or slightly slower for a 60x40 grid of points with 6 camera snapshots: the baseline averages 113 ms per solve, while batching sits at 124 ms. I've reverted this commit for now.
This reverts commit 50e76b9.
Force-pushed from 75eb07f to 21a5440
I've uploaded the dataset to https://drive.google.com/file/d/17iuFJK7PoEAjRpSRmQZbJMH_fy_egZOj/view?usp=sharing -- it's 204 pictures at 1280x720 of an 8x8 charuco board, square size 0.75 inches. Our uncertainty estimation strategy took ~2 seconds while fully pegging a single CPU core. mrcal's Python code (which for this dataset gives a very different answer, which I find concerning) takes 2-3 seconds. I exported a cameramodel using our json -> mrcal cameramodel tooling. So we're ~30% faster than mrcal? lit.
Is there a way to check which output is correct? Otherwise, the next step would be figuring out where the internal states of the two implementations diverge. The speedup could be caused by an implementation bug.
It looks like the bug is in how PhotonVision exports camera calibration data and creates an mrcal cameramodel for charuco boards. When I exported a dataset of 104 chessboard images (see the new MR description), the results are at least visually identical.
```java
    }
}

public double[] framePosesToRtToref() {
```
PhotonVision/photonvision#2344 has some cursed code that leaks a lot of mrcal-specific code into our monorepo. This function actually isn't used, afaict. We should decide at what level this marshalling-to-raw-lists lives (I'd argue in the JNI) and make sure that happens before merge.
src/mrcal-uncertainty.cpp (outdated)
```cpp
    dq_db.block(0, frame_start, 2, 3) = dq_dframes[frame] / Nboards;
}

// Eigen::IOFormat CommaFmt(Eigen::StreamPrecision, Eigen::DontAlignCols, ",
```
reminder to delete all remaining debug prints

We have docs on exporting calibrations to mrcal to run its uncertainty analysis tools. At great pain, we were able to port their code to C++. With this hard-coded dataset of 104 pictures of a 10x10 chessboard (from mrgingham), I'm seeing our estimation take HALF as much time as mrcal's Python implementation (!!!!!!!!!!!)