Would it be useful to add an integration such as CodeCov to continuously monitor whether a new test actually increases coverage, or whether a new feature reduces it?
Go through the prediction service and add unit tests for the Django/DRF views and plain Python functions. (This is correctness testing, separate from ML evaluation.)
Tests should not unduly slow down the build.
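As a starting point, here is a minimal sketch of the kind of fast DRF unit test this would cover. The endpoint name (`predict`), payload fields, and response keys are hypothetical placeholders, not the actual API of the prediction service:

```python
# Minimal sketch of fast, isolated unit tests for a DRF prediction endpoint.
# The URL name ("predict"), payload fields, and expected response keys are
# hypothetical -- adjust them to match the real prediction service API.
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase


class PredictionEndpointTests(APITestCase):
    def test_predict_returns_ok_for_valid_payload(self):
        # Hypothetical payload; replace with the service's real input fields.
        payload = {"text": "example input"}
        response = self.client.post(reverse("predict"), payload, format="json")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertIn("prediction", response.data)

    def test_predict_rejects_empty_payload(self):
        response = self.client.post(reverse("predict"), {}, format="json")
        self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
```

Tests like these exercise request validation and response shape only, so they stay fast and keep the build time down.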
Points will be awarded continuously through the end of the competition -- this issue will not close.