In January 2026, our new annual membership fee tier takes effect. The new tier is US$200 for member organisations with publishing revenue or expenses (whichever is higher) of up to US$1,000 annually. We announced the Board’s decision making this possible in July, and, as you can infer from Amanda’s latest blog, this is the first change to the annual membership fee tiers in close to 20 years!
The new fee tier resulted from the consultation process and fees review undertaken as part of the Resourcing Crossref for Future Sustainability program, carried out with the help of our Membership and Fees Committee (made up of representatives from member organisations and community partners). The program is ongoing, and the new fee tier, intended to make Crossref membership more accessible, is one of the first changes to come out of it.
It has been 18 (!) years since Crossref last deprecated a metadata schema. In that time, we’ve released numerous schema versions, some major updates, and some interim releases that never saw wide adoption. Now, with 27 different schemas to support, we believe it’s time to streamline and move forward.
Starting next year, we plan to begin the process of deprecating lightly used schemas, with the understanding that this will be a multi-year effort involving careful planning and plenty of communication.
Scholarly metadata, deposited by thousands of our members and made openly available, can act as “trust signals” for publications: it provides information that helps others in the community verify and assess the integrity of the work. Despite having a central responsibility for ensuring the integrity of the work they publish, editorial teams tend not to be fully aware of the value of metadata for the integrity of the scholarly record. How can we change that?
Crossref was created back in 2000 by 12 forward-thinking scholarly publishers from North America and Europe, and by 2002, these members had registered 4 million DOI records. At the time of writing, we have over 23,600 members in 164 countries. Half of our members are based in Asia, and 35% are universities or scholar-led. Together, these members have registered over 176 million open metadata records with DOIs. What a difference 25 years makes!
In our 25th anniversary year, I thought it was time to take a look at how we got here. And so (hold tight) we’re going to go on an adventure through space and time, stopping every five years through Crossref’s history to check in on our members. And we’re going to see some really interesting changes over the years.
To work out which version you’re on, take a look at the website address that you use to access iThenticate. If you go to ithenticate.com, then you are using v1. If you use a bespoke URL, such as https://crossref-[your member ID].turnitin.com/, then you are using iThenticate v2.0.
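If you want to check this in bulk (say, across saved links in support records), a minimal sketch along these lines can classify an access URL. The function name, the example member ID, and the exact host patterns are illustrative assumptions based only on the two addresses above, not any official iThenticate rule:

```python
from urllib.parse import urlparse

def ithenticate_version(url: str) -> str:
    """Guess the iThenticate version from its access URL (illustrative only).

    Assumes the two patterns described above:
      ithenticate.com                    -> v1
      crossref-<member ID>.turnitin.com  -> v2.0
    """
    host = urlparse(url).hostname or ""
    if host == "ithenticate.com" or host.endswith(".ithenticate.com"):
        return "v1"
    if host.startswith("crossref-") and host.endswith(".turnitin.com"):
        return "v2.0"
    return "unknown"

print(ithenticate_version("https://www.ithenticate.com/"))         # v1
print(ithenticate_version("https://crossref-1234.turnitin.com/"))  # v2.0 (hypothetical member ID)
```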
Creating and finding your Similarity Report (v1)
For each document you submit for checking, the Similarity Report provides an overall similarity breakdown. This is displayed as the percentage of similarity between the document and existing published content in the iThenticate database. iThenticate’s repositories include the published content provided by Crossref members, plus billions of web pages (both current and archived content), work that has previously been submitted to Turnitin, and a collection of works including thousands of periodicals, journals, and other publications.
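The report supplies this number for you, but as a rough mental model: if the score is simply the share of the document’s words that match indexed content (an assumption for illustration; the description above doesn’t spell out the exact calculation), it behaves like this:

```python
def similarity_percentage(matched_words: int, total_words: int) -> float:
    """Toy model of an overall similarity score as the share of matching words.

    Assumption for illustration only - not iThenticate's actual scoring code.
    """
    if total_words == 0:
        return 0.0
    return round(100 * matched_words / total_words, 1)

# e.g. 450 matching words in a 3,000-word manuscript -> 15.0 (percent)
print(similarity_percentage(450, 3000))
```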
Matches are highlighted, and the best matches are listed in the report sidebar. Other matches are called underlying sources, and these are listed in the content tracking mode. Learn more about the different viewing modes (Similarity Report mode, Content tracking mode, Summary report mode, Largest matches mode).
If two sources have exactly the same amount of matching text, the best match depends on which content repository contains the source of the match. For example, for two identical internet source matches, the most recently crawled internet source would be the best match. If an identical match is found to an internet source and a publication source, the publication source would be the best match.
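To make that tie-breaking rule concrete, here is a small sketch of how the comparison could be expressed in code. The record type and its fields are assumptions made up for this illustration; they do not reflect iThenticate’s internal implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceMatch:
    """Illustrative record for one matching source (fields are assumed)."""
    matched_words: int           # amount of overlapping text
    repository: str              # "publication" or "internet"
    crawled: date | None = None  # last crawl date, for internet sources

def best_match(a: SourceMatch, b: SourceMatch) -> SourceMatch:
    # More matching text always wins.
    if a.matched_words != b.matched_words:
        return a if a.matched_words > b.matched_words else b
    # Tie: a publication source beats an internet source.
    if a.repository != b.repository:
        return a if a.repository == "publication" else b
    # Tie between two internet sources: the most recently crawled wins.
    if a.repository == "internet":
        return a if (a.crawled or date.min) >= (b.crawled or date.min) else b
    return a  # identical in every respect described above

web = SourceMatch(120, "internet", date(2020, 4, 1))
pub = SourceMatch(120, "publication")
assert best_match(web, pub) is pub  # the publication source wins the tie
```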
Accessing the Similarity Report (v1)
To access the Similarity Report through iThenticate, start from the folder that contains the submission, and go to the Documents tab. In the Report column, you will see a button showing the Similarity Score - click it to open the document in the Document Viewer.
The Document Viewer (v1)
The Document Viewer screen opens in the last used viewing mode. There are three sections:
Along the top of the screen, the document information bar shows details about the submitted document. This includes the document title, the date the report was processed, the word count and the number of matching sources found in the selected databases.
The left panel is the document text. This shows the full text of the submitted document, highlighting areas of overlap with existing published content.
The right panel is the sources panel, which lists the matching sources; the highlight colors in the document text correspond to these sources.
The layout will depend on your chosen report mode:
Match Overview (show highest matches together) shows the best matches between the submitted document and content from the selected search repositories. Matches are color-coded and listed from highest to lowest percentage of matching words. Only the top or best matches are shown - you can see all other matches in the Match Breakdown and All Sources modes.
All Sources lets you view matches between the submission and a specifically selected source from the content repositories. The sidebar shows the full list of all matches found, not just the top matches per area of similarity, including matches that don’t appear in Match Overview because they overlap with areas that have better matches.
Match Breakdown shows all matches, including those that are hidden by a top source and therefore don’t appear in Match Overview. To see the underlying sources, hover over a match, and click the arrow icon. Select a source to highlight the matching text in the submitted document. Click the back arrow next to Match Breakdown to return to Match Overview mode.
Side-By-Side Comparison is an in-depth view that shows a document’s match compared side-by-side with the original source from the content repositories. From the All Sources view, choose a source from the sources panel, and a source box appears on the submitted document, highlighting the similar content within a snippet of the text from the repository source. In Match Overview, select the colored number at the start of the highlighted text to open this source box. To see the entire repository source, click Full Source View, which opens the full text of the repository source in the sources panel, along with all the matching instances. The sidebar shows the source’s full text with each match to the document highlighted in red. Click the X icon in the top right corner of the full source text panel to close it.
Use the view mode icons to switch between the two Similarity Report viewing modes: Match Overview (default, left icon) and All Sources (right icon).
Viewing live web pages for a source (v1)
You may access web-based sources by clicking on the source title/URL. If there are multiple matches to this source, use the arrow icons to quickly navigate through them.
If a source is restricted or paywalled (for example, subscription-based academic resources), you won’t be able to view the full text of the source, but you’ll still see the source box snippet for context. Some internet sources may no longer be live.
From Match Overview, click the colored number at the start of a piece of highlighted text on the submitted document. A source box will appear on the document text showing the similar content highlighted within a snippet of the text from the repository source. The source website will be in blue above the source snippet - click the link to access it.
From Match Breakdown or All Sources, select the source for which you want to view the website, and a diagonal icon will appear to the right of the source. Click this to access it.