Publishing options

Revision as of 19:20, 24 March 2021 by P.petrelli

Recently, journal editors have updated their data policies and now require that data relating to a submitted paper be made available by the authors. From the AGU data policy: "..all data necessary to understand, evaluate, replicate, and build upon the reported research must be made available and accessible whenever possible ..."

The aim of this change in the policy is to satisfy the principle that someone reading the paper should be able to reproduce your experiment. Again from the AGU policy: "For the purposes of this policy, data include, but are not limited to, the following:

  • Data used to generate, or be displayed in, figures, graphs, plots, videos, animations, or tables in a paper.
  • New protocols or methods used to generate the data in a paper.
  • New code/computer software used to generate results or analyses reported in the paper.
  • Derived data products reported or described in a paper. "

There can be practical and even copyright limitations to doing this, but these can be taken into account, and they should not be an impediment to publication if properly documented. The JGR-Space Physics editor-in-chief has listed some of these challenges and clarified the scope of the policy on his blog. It is an old post, but it includes references to model data that apply to much of the CLEX data as well.

The publishing policies page offers a list of data policies and requirements by publisher.


Where should I publish my data?

You usually have at least three options, and there is no single answer: it depends on what you are publishing and why.

CLEX Data Collection on NCI: this is the best place if you have produced the data yourself. We will help you document your data and make it user friendly; it will be part of a climate data collection, so it will be easier to discover. NCI also has more storage capacity than other repositories, and its services are designed around the netCDF format.

Institutional repository: this can be adequate if you have a small dataset that is really specific to your study, or is only a subset or post-processing of another dataset. While institutions offer some data curation, they usually will not check that the data is well described, consistent and user friendly, so you will get a DOI for your record but no added value.

Zenodo, Figshare, Mendeley: these services are free, and you can create your own account, create a record for your data and get a DOI fairly easily and quickly. You can also publish different kinds of material here. This can be useful if you want to publish some very specific data, for example the code and data needed to reproduce a specific figure, as required to publish a paper. If you are publishing data that could actually be useful to others, then you want to make sure it is FAIR (see above for a full description): Findable, which is harder when your record is not part of a discipline repository or collection, or you have not used keywords effectively; and Accessible, Interoperable and Reusable, which means the data should have enough metadata, use standards, and be properly described. Finally, the data size is limited to 50 GB, and you will not get any additional data services apart from HTTP download. This should be your last resort; still, if you go this way, please make sure you document your data properly. We are happy to provide support and review your record. We are also looking into creating a CLEX community in Zenodo to collect these data records in one place.

If you are confused, feel free to ask us; we are always happy to provide advice and support.

Where should I publish my code?

More and more frequently, journal editors require you to also publish your code. This is important to make your research reproducible, in particular for the paper reviewers. It can also be an opportunity to share a successful code with other researchers. If your code is properly published and someone else uses it, it is easier for them to cite you as the author. Like data, code represents part of your work, and funders are starting to look at all research products, not only papers, when reviewing applications.

Putting your code on GitHub or another version control service helps you keep track of the code, expose it to others and manage potential issues and enhancements. But GitHub is not ideal if you want to pinpoint the exact code you used for a paper or to create some data. Zenodo is a platform that integrates well with GitHub and allows you to publish every release automatically: Zenodo keeps a snapshot in time of your code and assigns a DOI to it. If you are not using GitHub, you can simply upload your files directly to Zenodo.

These are the reasons we started a Zenodo community for a CLEX Code Collection (CCC) in 2020. We have published there some of our own code, as well as code used to produce papers, as required by journal editors. We are now looking into broadening this and actively seek contributions of code, notebooks etc. that might be useful to others. Zenodo has given our codes much more visibility than GitHub, and some of the code has received lots of views and downloads.
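As an illustration of the GitHub–Zenodo workflow, a repository can include a `.zenodo.json` file in its root so that Zenodo picks up the record metadata when a release is archived. The sketch below is a minimal, hypothetical example: the title, author, keywords and community identifier are placeholders (check the actual CCC community identifier on Zenodo before using it).

```json
{
  "title": "my-analysis-code: scripts for Figure 3 (placeholder title)",
  "description": "Post-processing scripts used to produce the figures in the paper.",
  "creators": [
    { "name": "Surname, Given", "affiliation": "Your institution" }
  ],
  "keywords": ["climate", "CLEX", "reproducibility"],
  "license": "Apache-2.0",
  "upload_type": "software",
  "communities": [
    { "identifier": "clex-ccc" }
  ]
}
```

With a file like this in place, each tagged GitHub release archived by Zenodo inherits the same metadata, so the DOI record is well described without manual editing.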

How to publish

  • Publishing with NCI
  • Publishing with your institution
  • Publishing code in Zenodo CLEX Code Collection
  • Publishing data in Zenodo CLEX Data Collection (.. to come)

Reporting to CLEVER

Whichever way you decide to publish your data, as part of the NCI collection or with a repository provided by your institution, remember to add your published record to CLEVER, the CLEX reporting hub, in the "Publications and Datasets" section.

You will only need to record the main information there: author, title, DOI and citation. It will only take a couple of minutes. Published datasets are part of the Centre's KPIs and something we have to report to our funders.