Follow-up on #64. The old codebase has been consolidated into this repo and, in the process, reviewed for what renovations are needed to improve its utility.
Screenshot of old report:
Ideas/use cases covered in old report:
- Examine the annotation properties in use, to refine/formalize the data model at the time
- Examine the values across these properties, also to refine the data model at the time
- Examine projects with questionable keys/values (ones that shouldn't be there or should be revised) to help clean up those projects at the time
Some limitations:
- Results could be better presented/summarized.
- Data was pulled from fileviews specific to each project; however, fileviews don't necessarily surface all metadata.
Changes to implement:
- Don't rely only on local fileviews; pull all annotations on files directly (see the sketch after this list).
- The new crawler will be more intensive, so consider implementing it with something that parallelizes better than R.
- Assess annotations in the context of a template/data type rather than only key by key.
- Give the report a more project-centric view, vs. the old report, which is more data-model centric.
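Below is a minimal sketch of what the direct crawl plus template-level assessment could look like, assuming the Python synapseclient (`synapseutils.walk` and `Synapse.get_annotations` are existing client calls). The template name, its required keys, and the project IDs are hypothetical, and `ThreadPoolExecutor` just stands in for whatever parallelization the final implementation adopts:

```python
from concurrent.futures import ThreadPoolExecutor

import synapseclient
import synapseutils

syn = synapseclient.Synapse()
syn.login()  # assumes cached Synapse credentials

# Hypothetical template definition: required annotation keys per data type.
REQUIRED_KEYS = {
    "GenomicsAssayTemplate": {"resourceType", "assay", "dataType"},
}

def crawl_file_annotations(project_id):
    """Pull annotations directly off every file in a project,
    rather than reading them through a project-local fileview."""
    annotations = {}
    for _dirpath, _dirnames, files in synapseutils.walk(syn, project_id):
        for _name, syn_id in files:
            annotations[syn_id] = dict(syn.get_annotations(syn_id))
    return annotations

def assess_against_template(annots, template="GenomicsAssayTemplate"):
    """Assess a file's annotations as a set against a template,
    instead of judging each key/value in isolation."""
    missing = REQUIRED_KEYS[template] - set(annots)
    return {"valid": not missing, "missing": sorted(missing)}

# Crawl several projects in parallel; the project IDs are placeholders.
project_ids = ["syn11111111", "syn22222222"]
with ThreadPoolExecutor(max_workers=4) as pool:
    per_project = dict(zip(project_ids, pool.map(crawl_file_annotations, project_ids)))
```

Hitting the annotations endpoint per file avoids the fileview coverage gap noted above, at the cost of many more API calls, which is exactly why parallelizing the crawl matters.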
Update

TL;DR: the new report will focus on quality per dataset per project. However, some summary per-project status can be distilled and tracked in the studies table via the NTAP-requested `annotationStatus`. This could be a translation of:
- 0/3 datasets validated = "Needs attention"
- 2/3 datasets validated = "?"
- 3/3 datasets validated = "All good"

(It might be more informative to just keep the ratios, but on the other hand labels are easier to understand quickly.)
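For illustration, a minimal sketch of that translation as a pure function; the thresholds mirror the mapping above, and the middle label is a hypothetical placeholder since the intermediate ("?") case is still undecided:

```python
def annotation_status(n_validated: int, n_datasets: int) -> str:
    """Translate a per-project dataset validation ratio into an annotationStatus label."""
    if n_datasets > 0 and n_validated == n_datasets:
        return "All good"
    if n_validated == 0:
        return "Needs attention"
    return "Partially validated"  # hypothetical stand-in for the open "?" label

assert annotation_status(0, 3) == "Needs attention"
assert annotation_status(3, 3) == "All good"
```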
Open for comments. @jaybee84 @allaway @cconrad8
Mockup for a project with 2 datasets: