Workflow

This chapter describes the workflow for a data publication from source to publication, which is similar to the submission > review > editorial > publication flow established in scientific literature. The editorial process is coordinated by the editor-in-chief in close cooperation with the data librarian, the data curators of projects/institutes, the Pangaea editors and data scientists. Since 2011, the workflow and communication of each data submission have been documented through a ticket system.

Author(s), projects or institutes can choose to
 * archive a supplement related to a publication,
 * make results available to the scientific community, e.g. within a research project, or
 * publish a data report.

The data publication workflow is an interaction between the (corresponding) author and the curator/editor and consists of five steps, provided that:
 * the granularity of the data submission is defined,
 * parameters are defined with units, as close to international standards as possible,
 * metadata are on hand,
 * any information necessary to understand the generation and content of the data set is available.
 * 1) Submission - the author sends data sets with metadata description for archiving/publication following the submission guidelines and the policy of the project. Central address for submissions is info@pangaea.de or via the ticket system.
 * 2) Completeness check - the data package as submitted by the author is checked by the curator/editor for completeness of metadata and validity of the data. A request will be sent to the author if mandatory information is missing.
 * 3) Archiving - the availability of the required metadata is checked via the editorial system (4D); missing information is defined by the curator. Data are converted to the Pangaea import format. For single files an editor or spreadsheet software is used; for multiple files with similar formats, the use of PanTool and Split2Events is recommended. With the editorial system the relations are set between metadata and data prior to import. External documents should be in PDF/A format and must be linked via a persistent identifier. After import, the availability and validity of the dataset(s) are checked in browser view by the curator. This includes a validity check of all external links.
 * 4) Proofread - the curator sends the DOI as a link to the author(s) and requests a proofread. Through an iterative process between author and curator via the service ticket system (or e-mail), the dataset is edited until it is finally approved by the author.
 * 5) Publication - the data set status is set to published; the DOI becomes valid 4 weeks after the final editing. On request of the author, a password may be set for a moratorium period as defined in a project's data policy or - for supplements - until the paper is publicly available. Between submission and publication of a paper, reviewers may access the data via the DOI, so the data can easily be included in the peer-review process.
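The five steps above form a strictly sequential lifecycle, which can be sketched as a simple state machine. This is only an illustration: the class, state, and ticket names below are hypothetical and not part of the actual Pangaea editorial system.

```python
# Illustrative sketch of the five-step dataset lifecycle; all names are
# hypothetical and do not reflect the real editorial system's data model.
from enum import Enum, auto

class Status(Enum):
    SUBMITTED = auto()   # 1) author sent data + metadata
    CHECKED = auto()     # 2) completeness check passed
    ARCHIVED = auto()    # 3) imported, DOI assigned
    APPROVED = auto()    # 4) author finished proofread
    PUBLISHED = auto()   # 5) status set to published

# Allowed transitions mirror the strictly sequential workflow.
TRANSITIONS = {
    Status.SUBMITTED: Status.CHECKED,
    Status.CHECKED: Status.ARCHIVED,
    Status.ARCHIVED: Status.APPROVED,
    Status.APPROVED: Status.PUBLISHED,
}

class DatasetTicket:
    """One submission tracked through the workflow (hypothetical model)."""

    def __init__(self, ticket_id: str):
        self.ticket_id = ticket_id
        self.status = Status.SUBMITTED

    def advance(self) -> Status:
        """Move the ticket to the next workflow step; published is final."""
        try:
            self.status = TRANSITIONS[self.status]
        except KeyError:
            raise ValueError(f"{self.ticket_id} is already published") from None
        return self.status

ticket = DatasetTicket("T-0001")
while ticket.status is not Status.PUBLISHED:
    ticket.advance()
print(ticket.status.name)  # PUBLISHED
```

Because each step depends on the previous one (e.g. no proofread before a DOI exists), skipping a state is not representable in this model, which matches the iterative author/curator process described above.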

Data supplements with status "supplementary data" can be linked automatically to the publication's splash page in the journal's/publisher's catalog through a web service.
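The DOI-based linking works because any dataset DOI resolves through the doi.org proxy; a catalog only needs the DOI string to build a stable link back to the dataset. A minimal sketch, with a hypothetical dataset DOI:

```python
# Minimal sketch: construct the resolvable URL for a dataset DOI via the
# doi.org proxy. The DOI value below is a hypothetical example.
doi = "10.1594/PANGAEA.123456"
url = f"https://doi.org/{doi}"
print(url)  # https://doi.org/10.1594/PANGAEA.123456
```

Since the DOI is assigned at archiving (step 3) but only becomes publicly valid at publication (step 5), the same link can be shared with reviewers before the paper appears.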

The resources required to establish this workflow need to become a mandatory part of each research project. The amount is estimated in the business model.