Metadata development is one of our key priorities this year, and we're making a start with the release of version 5.4.0 of our input schema, which includes some long-awaited changes. This is the first in what will be a series of metadata schema updates.
What is in this update?
Publication typing for citations
This is fairly simple: we've added a 'type' attribute to the citations members supply. This means you can identify a journal article citation as a journal article, but more importantly, you can identify a dataset, software, blog post, or other citation that may not have an identifier assigned to it. This makes it easier for the many thousands of metadata users to connect these citations to identifiers. We know many publishers, particularly journal publishers, already collect this information and will consider making this change to deposit citation types with their records.
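To give a rough sense of what a typed citation could look like in a deposit, here is a minimal Python sketch (using the standard library's xml.etree.ElementTree) that builds a citation_list fragment with a 'type' attribute on each citation. The specific type values shown ('journal_article', 'dataset') are illustrative assumptions rather than the schema's controlled vocabulary, so please check the 5.4.0 schema documentation for the permitted values.

```python
import xml.etree.ElementTree as ET

# Build a small <citation_list> fragment in which each <citation> carries
# the new 'type' attribute. The type values below are placeholders only.
citation_list = ET.Element("citation_list")

# A reference with a DOI, typed as a journal article.
ref1 = ET.SubElement(citation_list, "citation", {"key": "ref1", "type": "journal_article"})
ET.SubElement(ref1, "doi").text = "10.5555/12345678"

# A dataset reference without an identifier, supplied as an unstructured citation.
ref2 = ET.SubElement(citation_list, "citation", {"key": "ref2", "type": "dataset"})
ET.SubElement(ref2, "unstructured_citation").text = (
    "Example Research Group (2024). Survey responses dataset, version 2."
)

print(ET.tostring(citation_list, encoding="unicode"))
```

The point of the example is simply that the citation's type travels with the citation itself, so a dataset or software reference can be flagged as such even when no identifier is available.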
Every year we release metadata for the full corpus of records registered with us, which can be downloaded for free in a single compressed file. This is one way in which we fulfil our mission to make metadata freely and widely available. By including the metadata of over 165 million research outputs from over 20,000 members worldwide and making them available in a standard format, we streamline access to metadata about scholarly objects such as journal articles, books, conference papers, preprints, research grants, standards, datasets, reports, blogs, and more.
Today, we’re delighted to let you know that Crossref members can now use ROR IDs to identify funders in any place where you currently use Funder IDs in your metadata. Funder IDs remain available, but this change allows publishers, service providers, and funders to streamline workflows and introduce efficiencies by using a single open identifier for both researcher affiliations and funding organizations.
As you probably know, the Research Organization Registry (ROR) is a global, community-led, carefully curated registry of open persistent identifiers for research organisations, including funding organisations. It's a joint initiative led by the California Digital Library, DataCite, and Crossref, launched in 2019, that fulfils the long-standing need for an open organisation identifier.
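As a hedged illustration of using a ROR ID in place of a Funder ID in funding metadata, the Python sketch below builds a fundref-style funding assertion in which the funder_identifier value is a ROR ID rather than a 10.13039 Funder ID DOI. The element structure follows the familiar fr:assertion pattern, and the ROR ID shown is a placeholder rather than a real organisation's identifier; consult the current deposit schema documentation before adopting it.

```python
import xml.etree.ElementTree as ET

# Namespace used by Crossref funding ("fundref") assertions.
FR = "http://www.crossref.org/fundref.xsd"
ET.register_namespace("fr", FR)

program = ET.Element(f"{{{FR}}}program", {"name": "fundref"})
fundgroup = ET.SubElement(program, f"{{{FR}}}assertion", {"name": "fundgroup"})

funder_name = ET.SubElement(fundgroup, f"{{{FR}}}assertion", {"name": "funder_name"})
funder_name.text = "Example Funding Agency"

# Where a Funder ID (a 10.13039/... DOI) would previously have been supplied,
# a ROR ID can now be used instead. The value below is a placeholder.
funder_id = ET.SubElement(funder_name, f"{{{FR}}}assertion", {"name": "funder_identifier"})
funder_id.text = "https://ror.org/00placeholder"

award = ET.SubElement(fundgroup, f"{{{FR}}}assertion", {"name": "award_number"})
award.text = "ABC-12345"

print(ET.tostring(program, encoding="unicode"))
```

The design point is that the same identifier field simply accepts a ROR ID, so workflows that already resolve affiliations with ROR can reuse that lookup for funders.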
We began our Global Equitable Membership (GEM) Program to provide greater membership equitability and accessibility to organizations in the world's least economically advantaged countries. Eligibility for the program is based on a member's country; our list of countries is predominantly based on the International Development Association (IDA) country list. Eligible members pay no membership or content registration fees. The list undergoes periodic reviews, as countries may be added or removed over time as economic situations change.
The Similarity Check Advisory Group met a number of times last year to discuss current and emerging originality issues with text-based content. During those meetings, the topic of image integrity was highlighted as an area of growing concern in scholarly communications, particularly in the life sciences.
Over the last few months, we have also read with interest the recommendations for handling image integrity issues from the STM Working Group on Image Alteration and Duplication Detection, closely followed image integrity sleuths such as Elisabeth Bik, and, like many of you, noticed that image manipulation is increasingly given as the reason for retractions.
Image integrity issues are often associated with paper mill activity, but they can also originate from an individual's intentional or unintentional unethical behaviour. Currently, such issues with figures and images are identified manually or with an image integrity tool that compares images only within the same article and/or against the publisher's own past publications, and we know that this is a source of frustration for the Crossref members we have spoken to.
What next?
As reported in Nature last December, we believe Crossref is in a unique position to spearhead a cross-publisher solution, similar to what we do for text-based originality checking as part of our Similarity Check service.
Before we start exploring potential software options, we need your help to understand:
the scale of the issues and whether these are focused on specific disciplines
the types of issues we should prioritise, e.g. duplication, beautification, rotation, plagiarism, or GAN-generated images/deep-fakes
what software (if any) members are using or trialling
whether a cross-publisher service with the collective benefit of shared images would be of sufficient interest to the community
✏️ Let us know your experience and thoughts on image integrity by completing this survey.
We plan to complete our research soon and will share the results with you, along with our proposed next steps.