
refactor contour_map function and update test cases with image_comparison (as per matplotlib test guidelines) #96

Open · wants to merge 3 commits into develop

Conversation

@xefonon (Collaborator) commented Feb 3, 2025

This PR slightly refactors the `contour_map` function in spatial.py and updates the test cases in test_plots.py to use the `image_comparison` decorator, as per the Matplotlib testing guidelines (https://matplotlib.org/devdocs/devel/testing.html).

Changes Made:

- Refactored the `contour_map` function: replaced `ax.contour` with `ax.contourf` so the function no longer breaks while producing visually similar results (see the first sketch after this list). This solves #91.
- Updated the test cases: added the `@image_comparison` decorator to the relevant test functions in test_plots.py to validate the visual output against baseline images (see the second sketch below). This should solve #95.
- Reordered the imports to satisfy Ruff (imports first, then comments).

Affected Files:

- spatial.py
- test_plots.py
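
A minimal sketch of the `contour` → `contourf` change, assuming a hypothetical `contour_map` signature (the real function in spatial.py may take different arguments):

```python
import numpy as np
import matplotlib.pyplot as plt

def contour_map(data, ax=None):
    """Illustrative stand-in for spatial.contour_map; the real signature may differ."""
    if ax is None:
        _, ax = plt.subplots()
    # Before: mappable = ax.contour(data)   # line contours, broke for some inputs (#91)
    mappable = ax.contourf(data)            # filled contours: robust, visually similar
    plt.colorbar(mappable, ax=ax)
    return ax

# Example usage with dummy data
contour_map(np.random.default_rng(0).random((20, 20)))
```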
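And a sketch of the new test style; the baseline name, import path, and dummy data are assumptions for illustration:

```python
import numpy as np
from matplotlib.testing.decorators import image_comparison

from spatial import contour_map  # assumed import path

@image_comparison(baseline_images=['contour_map'], extensions=['png'])
def test_contour_map():
    # The decorator renders the figure produced here and compares it
    # pixel-wise against a stored baseline image of the same name.
    contour_map(np.random.default_rng(0).random((20, 20)))
```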

@xefonon (Collaborator, Author) commented Feb 3, 2025

OK, so apparently my version of matplotlib is not the same as CircleCI's, so the compared images are "not the same", I guess.

@f-brinkmann (Member) commented
@xefonon I see that the tests are failing because of the image comparison. We've had problems with that in pyfar as well, where the images generated by matplotlib differed depending on the operating system. https://matplotlib.org/stable/api/testing_api.html#matplotlib.testing.decorators.image_comparison allows setting a tolerance, but the errors we get in testing seem to be quite large: we see RMS values between 7 and 10, while Matplotlib cites 0.06 as the RMS error due to numerical precision.

Can you maybe check the baseline to see if everything is correct? I'm not sure image comparison makes sense here.
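
For reference, a sketch of how such a tolerance could be passed; the test body and baseline name are placeholders:

```python
import matplotlib.pyplot as plt
from matplotlib.testing.decorators import image_comparison

# tol is the maximum allowed RMS difference to the baseline (default 0);
# absorbing the CI differences above would need tol around 10, far beyond
# the ~0.06 matplotlib attributes to numerical precision.
@image_comparison(baseline_images=['contour_map'], extensions=['png'], tol=10)
def test_contour_map():
    fig, ax = plt.subplots()
    ax.contourf([[0.0, 1.0], [1.0, 0.0]])  # placeholder plot for illustration
```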

@xefonon (Collaborator, Author) commented Feb 5, 2025

> @xefonon I see that the tests are failing because of the image comparison. We've had problems with that in pyfar as well, where the images generated by matplotlib differed depending on the operating system. https://matplotlib.org/stable/api/testing_api.html#matplotlib.testing.decorators.image_comparison allows setting a tolerance, but the errors we get in testing seem to be quite large: we see RMS values between 7 and 10, while Matplotlib cites 0.06 as the RMS error due to numerical precision.
>
> Can you maybe check the baseline to see if everything is correct? I'm not sure image comparison makes sense here.

I mean, I created the baseline, so I guess the results can vary significantly depending on one's OS version. If we drop the image comparison, what should we do instead?

@f-brinkmann (Member) commented
> I mean, I created the baseline, so I guess the results can vary significantly depending on one's OS version. If we drop the image comparison, what should we do instead?

In pyfar we still generate images but inspect the changed images manually. That would be fine for me here as well. @mberz do you agree or should we try something else?
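
A minimal sketch of that workflow, assuming tests merely exercise the plotting code and write figures to a directory for manual review (all names and paths here are hypothetical):

```python
import pathlib

import matplotlib
matplotlib.use('Agg')  # headless backend for CI
import matplotlib.pyplot as plt
import numpy as np

OUTPUT_DIR = pathlib.Path(__file__).parent / 'output'  # hypothetical location

def test_contour_map_creates_figure():
    # The test only asserts that plotting runs without raising; the saved
    # image is inspected manually whenever the plotting code changes.
    OUTPUT_DIR.mkdir(exist_ok=True)
    fig, ax = plt.subplots()
    ax.contourf(np.random.default_rng(0).random((20, 20)))
    fig.savefig(OUTPUT_DIR / 'contour_map.png')
    plt.close(fig)
```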
