Not sure if you’re using iThenticate v1 or iThenticate v2? More here.
Not sure whether you’re an account administrator? Find out here.
The Submitted Works repository (also called the Private Repository) is a new feature in iThenticate v2 that is now available to Crossref members. It lets you check for similarity not only against Turnitin’s extensive Content Database but also against all previous manuscripts submitted to your iThenticate account across all the journals you work on. This makes it possible to spot collusion between authors or potential cases of duplicate submission.
How does this work?
Suppose you receive a manuscript from Author 1 and decide to index it into your Submitted Works repository. Some time later you receive a new manuscript from Author 2 and, when generating the Similarity Report, choose to check against your Submitted Works repository. If a paragraph in Author 2’s manuscript matches a paragraph in Author 1’s manuscript, it is highlighted in the Similarity Report as a match against your Submitted Works repository.
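To make the idea concrete, here is a minimal, purely illustrative Python sketch of what checking a new manuscript against a private repository of earlier submissions involves: earlier manuscripts are indexed, and each paragraph of a later manuscript is compared against them. Everything in this sketch (the PrivateRepository class, the word-shingle comparison, the 0.5 threshold) is a hypothetical toy model for illustration only; Turnitin’s actual matching technology is proprietary and works very differently.

```python
# Toy illustration of a "private repository" similarity check.
# Paragraphs are compared using Jaccard similarity over word shingles.
from dataclasses import dataclass, field


def shingles(text: str, n: int = 5) -> set:
    """Break a paragraph into overlapping n-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}


def jaccard(a: set, b: set) -> float:
    """Similarity between two shingle sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


@dataclass
class PrivateRepository:
    """Holds previously indexed submissions, keyed by a submission id."""
    indexed: dict = field(default_factory=dict)

    def index(self, submission_id: str, paragraphs: list) -> None:
        """Add a manuscript's paragraphs to the repository."""
        self.indexed[submission_id] = paragraphs

    def find_matches(self, paragraphs: list, threshold: float = 0.5):
        """Yield (paragraph index, earlier submission id, score) for close matches."""
        for i, new_para in enumerate(paragraphs):
            new_sh = shingles(new_para)
            for sub_id, old_paras in self.indexed.items():
                for old_para in old_paras:
                    score = jaccard(new_sh, shingles(old_para))
                    if score >= threshold:
                        yield (i, sub_id, score)


# Author 1's manuscript is indexed; Author 2's later submission is checked against it.
repo = PrivateRepository()
repo.index("author1-manuscript",
           ["The quick brown fox jumps over the lazy dog near the river bank."])

new_manuscript = [
    "An unrelated opening paragraph describing the methods used.",
    "The quick brown fox jumps over the lazy dog near the river bank.",
]
for para_idx, source_id, score in repo.find_matches(new_manuscript):
    print(f"Paragraph {para_idx} matches {source_id} (similarity {score:.2f})")
```

In iThenticate itself, of course, you do not write any code: indexing a manuscript into the Submitted Works repository and checking against it are options you select when submitting a manuscript and generating the Similarity Report.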
Clicking on the match shows the full text of the submission you’ve matched against, along with details about that submission, such as the name and email address of the user who submitted it, the date it was submitted, and its title.
Both the full source text and the submission details can be switched off individually.
As with all matches, these can be excluded from the Sources Overview panel, or you can turn off matches against the Submitted Works repository altogether in the settings.