Peer Review of RO-5

  • Authors: Kyle Chard, Niall Gaffney, Matthew B. Jones, Kacper Kowalik, Bertram Ludäscher, Jarek Nabrzyski, Victoria Stodden, Ian Taylor, Thomas Thelen, Matthew J. Turk, Craig Willis
  • Title: Application of BagIt-Serialized Research Object Bundles for Packaging and Re-execution of Computational Analyses
  • Submission: https://doi.org/10.5281/zenodo.3271763
  • Submitted as: RO2019 article
  • Decision: ACCEPT paper (accepted for proceeding and oral presentation)

Review 1

  • Reviewer: (anonymous)

Quality of Writing

Is the text easy to follow? Are core concepts defined or referenced? Is it clear what is the author's contribution?

  • excellent

The article is clear and easy to understand.

Research Object / Zenodo

URL for a Research Object or Zenodo record provided? Guidelines followed? Open format (e.g. HTML)? Sufficient metadata, e.g. links to software? Some form of Data Package provided? Add text below if you need to clarify your score.

  • basic (e.g. Zenodo with PDF and minimal metadata)

I didn't see a Zenodo record with the article itself packaged, but the article does link to a Zenodo example: https://zenodo.org/record/2641314#.XTop35MzbOQ

Overall evaluation

Please provide a brief review, including a justification for your scores. Both score and review text are required.

  • strong accept

This is a clear, well-written article describing the use of Research Objects in the Whole Tale system, which allows for the publishing of re-runnable, well-described objects. It covers the design well and discusses implementation issues.

I would have liked a little more detail on how users can run the system locally, but given that the focus is on the packaging format, that is OK.

There is one MAJOR issue: the references are all showing up for me in the PDF as [?], although they are numbered in the reference section at the end.

Review 2

  • Reviewer: (anonymous)

Quality of Writing

Is the text easy to follow? Are core concepts defined or referenced? Is it clear what is the author's contribution?

  • good

Good writing. Would prefer more detail on the scenario over some of the abstraction in the requirements section.

Research Object / Zenodo

URL for a Research Object or Zenodo record provided? Guidelines followed? Open format (e.g. HTML)? Sufficient metadata, e.g. links to software? Some form of Data Package provided? Add text below if you need to clarify your score.

  • good (followed guidelines, demonstrating own format, related resources included, but some details missing)

Zenodo with HTML and example of format

Overall evaluation

Please provide a brief review, including a justification for your scores. Both score and review text are required.

  • accept

This paper provides some nice details about how Research Objects are being explored in a particular platform (Whole Tale), and how the BagIt serialization is being used.

I think RO-Crate discussions will be informed by this experience. The paper has some interesting discussion points, but I felt that the scenario described in Section II could be better woven through the rest of the paper.

Specifically, the requirements and discussion are more abstract and harder to understand because they are not grounded in the scenario.

  • It's not clear that the scenario in Section II is carried through the rest of the paper. The appendix seems to describe humans and hydrology, and the provenance in Fig. 3 discusses isotopes. Suggest trying to make this more cohesive.
  • With respect to the SCM discussion, do the various files work well with diffs (e.g. are JSON files kept ordered)?
  • It seems there is a lot going on in the Whole Tale system, but it's not clear how it all works (a more concrete scenario might help here, too). For example, VI.B states provenance could be generated by Whole Tale, but Section IV indicates this doesn't exist yet. Similarly, the "Re-execute" step in Fig. 1 also seems to be in progress (with respect to validation). Is it clear that the current model will support this, or could things change?
  • With respect to the "Simplicity and Understanding" requirement, does the BagIt serialization help? It seems like many users would prefer an app (perhaps the dashboard?) that presents this information rather than examining the serialization. This seems to be aligned with the .bagit discussion in Section VI.F.
  • V.C notes that the environment data is not described using standard vocabularies. Is this a typo, or should there be some explanation about this decision?
  • Is external generic data limited to HTTP? Can HTTPS be used? Otherwise, a note about this restriction would be helpful.
  • Remove \end{verbatim} at the end of the appendix.
  • The PDF still seems to be missing the references, even in Version 2. The HTML version has them.
  • Section II references Figure II, which should be Fig. 1.