In the first half of this year we’ve been talking to our community about post-publication changes and Crossmark. Publication isn’t the end of the journey for a piece of research: it is read, reused, and sometimes modified. That’s why we run Crossmark, which provides notifications of important changes made to research after publication. Readers can see whether the research they are looking at has updates by clicking the Crossmark logo.
We’re happy to be marking five years this month since Crossref launched its Grant Linking System. The Grant Linking System (GLS) started life as a joint community effort to create ‘grant identifiers’ and support the needs of funders in the scholarly communications infrastructure.
The system includes a funder-designed metadata schema and a unique link for each award, enabling connections with millions of research outputs, better reporting on the research and outcomes of funding, and a contribution to open science infrastructure.
In our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions.
Basic terminology

Metadata matching is a high-level concept, with many different problems falling into this category.
Update 2024-07-01: This post is based on an interview with Euan Adie, founder and director of Overton.
What is Overton?

Overton is a big database of government policy documents, drawing on sources including intergovernmental organizations, think tanks, large NGOs, and in general anyone who’s trying to influence a government policy maker. What we’re interested in is, basically, taking all the good parts of the scholarly record and applying some of that to the policy world.
In 2022, we flagged up some changes to Similarity Check, which were taking place in v2 of Turnitin’s iThenticate tool used by members participating in the service. We noted that further enhancements were planned, and want to highlight some changes that are coming very soon. These changes will affect functionality used by account administrators, but will not affect the Similarity Reports themselves.
From Wednesday 3 May 2023, administrators of iThenticate v2 accounts will notice some changes to the interface and improvements to the Users, Groups, Integrations, Statistics and Paper Lookup sections.
Logging in
iThenticate v2 account administrators and browser users will see a new login page when logging in to iThenticate v2:
A refreshed interface
Once logged in to iThenticate v2, account administrators will see an updated design, with improved notifications to let them know whether a task/action has been successfully completed or not.
Users
There will be improvements to the user management system for account administrators, including a much clearer navigation menu for managing active, pending and deactivated users.
There will also be a filtering option on the Users page to search for active, pending and deactivated users by first name, last name, email address, group and date added. In addition, coloured labels will be introduced to make it easy to identify the level of access (or ‘Role’) for each user.
An improved bulk user import process will be available, with clearer guidance on any issues that may arise during the upload. This new development will also include new screens for adding and editing users with more notifications to help prevent mistakes.
Integrations
For account administrators managing peer review management system integrations and needing to generate API keys, the Integrations page will be improved to make copying API keys simpler.
Statistics
iThenticate v2 administrators will also notice some improvements to the Statistics page. Usage data should load faster and will be sortable by user group. Administrators will also be able to generate large usage reports covering over 100k submissions.
Paper lookup
The Paper lookup will allow iThenticate v2 account administrators to find submissions made from any integration connected to their iThenticate v2 account, by searching for the paper ID (or oid number) of the submission.
Please note: the ability to search for submissions by the user’s name is available for manuscripts submitted via the iThenticate v2 website only and not for papers submitted via an integration.
New password requirements
To improve the security of users’ accounts, new password requirements will be introduced: a minimum of 8 characters, including at least 1 special character, 1 upper-case letter, and 1 number.
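For illustration, the four requirements above can be expressed as a simple check. This is just a sketch of the stated rules, not iThenticate’s actual validation logic, and the function name `check_password` is our own:

```python
import re

def check_password(password: str) -> list[str]:
    """Return a list of unmet requirements (an empty list means the password passes).

    Illustrative only -- based on the four rules announced for iThenticate v2,
    not on Turnitin's actual implementation.
    """
    problems = []
    if len(password) < 8:
        problems.append("at least 8 characters")
    if not re.search(r"[A-Z]", password):
        problems.append("at least 1 upper-case letter")
    if not re.search(r"[0-9]", password):
        problems.append("at least 1 number")
    # Treat anything that is not a letter or digit as a "special" character.
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("at least 1 special character")
    return problems

# A password meeting all four rules returns no problems:
print(check_password("Str0ng!pass"))  # → []
```

Note that what counts as a “special” character may differ in the real system; here we assume any non-alphanumeric character qualifies.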
Next in iThenticate v2
Turnitin, who produce iThenticate, are currently working on a number of new features and developments, including an improved Similarity Report, paraphrase detection, and AI writing detection. A detailed timeline is not yet available, but we’ll be updating you on these new developments in the coming months.
✏️ Do get in touch via support@crossref.org if you have any questions about iThenticate v1 or v2 or start a discussion by commenting on this post below.