This seems relatively easy to wrap into the existing UNF standard. Treat a file as a binary vector, base64 encode it, hash using SHA256, and truncate to the specified UNF length. This could then be aggregated just like dataset UNFs are currently combined to create the study-level UNF.
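A minimal sketch of the steps above, assuming a hypothetical `file_unf` helper and a 128-bit truncation length (the actual length would be whatever the UNF specification dictates):

```python
import base64
import hashlib

def file_unf(path, trunc_bytes=16):
    """Hypothetical sketch: a UNF-style signature for an arbitrary file.

    Follows the steps described above: treat the file as a binary
    vector, base64 encode it, hash with SHA-256, and truncate.
    `trunc_bytes` is an assumed default, not part of the standard.
    """
    with open(path, "rb") as f:
        data = f.read()
    encoded = base64.b64encode(data)           # base64 encode the binary vector
    digest = hashlib.sha256(encoded).digest()  # hash with SHA-256
    truncated = digest[:trunc_bytes]           # truncate to the specified UNF length
    # Base64-encode the truncated digest for a printable signature
    return base64.b64encode(truncated).decode("ascii")
```

Because the result is a fixed-length printable string, file-level signatures of this form could be sorted and re-hashed together, mirroring how dataset UNFs are combined into a study-level UNF today.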
Even though MD5 is a common standard in archiving, SHA256 is reasonably widely implemented and would be consistent with the existing UNF standard. I think you would still have to supply MD5s somewhere in Dataverse, though, given their prevalence as a checksum.
We would like to apply a new, more general UNF algorithm across all files. See the original discussion on this in IQSS/dataverse#2192.
Functional Requirements Document (FRD) will be created and linked to this issue.