Blog

Dominika Tkaczyk

Dominika joined Crossref’s R&D team in August 2018 as a Principal R&D Developer. Within her first few years at Crossref, she focused primarily on the research and development of metadata matching strategies, to enrich the Research Nexus network with new relationships. In 2024 Dominika became Crossref’s Director of Data Science and launched the Data Science Team. The goal of the Data Science Team is to explore new possibilities for using the data to serve the scholarly community, continue the enrichment of the scholarly record with more metadata and relationships, and develop strong collaborations with like-minded community initiatives. Before joining Crossref, Dominika was a researcher and a data scientist at the University of Warsaw, Poland, and a postdoctoral researcher at Trinity College Dublin, Ireland. She received a PhD in Computer Science from the Polish Academy of Sciences in 2016 for her research on metadata extraction from full-text documents using machine learning and natural language processing techniques.

Read more about Dominika Tkaczyk on her team page.

How good is your matching?

https://doi.org/10.13003/ief7aibi

In our previous blog post in this series, we explained why no metadata matching strategy can return perfect results. Thankfully, however, this does not mean that it’s impossible to know anything about the quality of matching. Indeed, we can (and should!) measure how close (or far) we are from achieving perfection with our matching. Read on to learn how this can be done! How about we start with a quiz?
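
To give a flavour of what “measuring” can look like in practice, here is a minimal sketch of evaluating a matcher against a small, manually curated sample. The `matcher` callable, the sample structure, and the field names are hypothetical placeholders for illustration, not Crossref’s actual evaluation tooling.

```python
# A minimal sketch of evaluating a matcher against a manually curated sample.
# The matcher callable, the sample structure, and the field names are
# hypothetical placeholders, not Crossref's actual evaluation tooling.

def evaluate(matcher, sample):
    """Compare a matcher's output with manually verified target identifiers."""
    tp = fp = fn = 0
    for item in sample:
        predicted = matcher(item["input"])   # e.g. a DOI, or None for "no match"
        expected = item["true_id"]           # manually curated answer, or None
        if predicted is not None and predicted == expected:
            tp += 1                          # correct link found
        elif predicted is not None:
            fp += 1                          # a link was returned, but it is wrong
        if expected is not None and predicted != expected:
            fn += 1                          # a true link was missed or mismatched
    precision = tp / (tp + fp)               # how many returned links are correct
    recall = tp / (tp + fn)                  # how many true links were found
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "F1": f1}
```

The key point the sketch illustrates is that quality is measured against a human-verified sample, and that “wrong match” and “missed match” are counted separately, because they hurt different users in different ways.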

The myth of perfect metadata matching

https://doi.org/10.13003/pied3tho

In the previous instalments of this blog series about matching (see part 1 and part 2), we explained what metadata matching is and why it is important, and described its basic terminology. In this entry, we will discuss a few common beliefs about metadata matching that we often encounter when interacting with users, developers, integrators, and other stakeholders. Spoiler alert: we are calling them myths because these beliefs are not true!

The anatomy of metadata matching

https://doi.org/10.13003/zie7reeg

In our previous blog post about metadata matching, we discussed what it is and why we need it (tl;dr: to discover more relationships within the scholarly record). Here, we will describe some basic matching-related terminology and the components of a matching process. We will also pose some typical product questions to consider when developing or integrating matching solutions.

Basic terminology

Metadata matching is a high-level concept, with many different problems falling into this category.
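
To make “the components of a matching process” a little more concrete, here is an illustrative sketch of the two-step shape such a process often takes: a recall-oriented candidate retrieval step followed by a precision-oriented scoring and validation step. `search_index`, `similarity`, and the threshold value are hypothetical stand-ins, not a description of Crossref’s matchers.

```python
# An illustrative sketch of two typical components of a matching process:
# recall-oriented candidate retrieval and precision-oriented scoring/validation.
# search_index and similarity are hypothetical helpers, not a real API.

def match(query_metadata, search_index, similarity, threshold=0.8):
    """Return the best-matching candidate for the input metadata, or None."""
    candidates = search_index(query_metadata)             # cheap, broad retrieval
    scored = [(similarity(query_metadata, c), c) for c in candidates]
    if not scored:
        return None                                       # nothing to choose from
    best_score, best = max(scored, key=lambda pair: pair[0])
    return best if best_score >= threshold else None      # accept only confident matches
```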

Metadata matching 101: what is it and why do we need it?

https://doi.org/10.13003/aewi1cai

At Crossref and ROR, we develop and run processes that match metadata at scale, creating relationships between millions of entities in the scholarly record. Over the last few years, we’ve spent a lot of time diving into details about metadata matching strategies, evaluation, and integration. It is quite possibly our favourite thing to talk and write about! But sometimes it is good to step back and look at the problem from a wider perspective.

Discovering relationships between preprints and journal articles

Dominika Tkaczyk – 2023 December 07

In Preprints, Linking

In the scholarly communications environment, the evolution of a journal article can be traced by the relationships it has with its preprints. Those preprint–journal article relationships are an important component of the research nexus. Some of those relationships are provided by Crossref members (including publishers, universities, research groups, funders, etc.) when they deposit metadata with Crossref, but we know that a significant number of them are missing. To fill this gap, we developed a new automated strategy for discovering relationships between preprints and journal articles and applied it to all the preprints in the Crossref database. We made the resulting dataset, containing both publisher-asserted and automatically discovered relationships, publicly available for anyone to analyse.

The more the merrier, or how more registered grants means more relationships with outputs

One of the main motivators for funders registering grants with Crossref is to simplify the process of research reporting with more automatic matching of research outputs to specific awards. In March 2022, we developed a simple approach for linking grants to research outputs and analysed how many such relationships could be established. In January 2023, we repeated this analysis to see how the situation changed within ten months. Interested? Read on!

Follow the money, or how to link grants to research outputs

The ecosystem of scholarly metadata is filled with relationships between items of various types: a person authored a paper, a paper cites a book, a funder funded research. Those relationships are absolutely essential: an item without them is missing the most basic context about its structure, origin, and impact. No wonder that finding and exposing such relationships is considered very important by virtually all parties involved. Probably the most famous instance of this problem is finding citation links between research outputs. Lately, another instance has been drawing more and more attention: linking research outputs with grants used as their funding source. How can this be done and how many such links can we observe?
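
As a rough sketch of one way such links can be looked up (a simplified illustration, not the full analysis described in the post), the Crossref REST API can be queried for works whose funding metadata mentions a given funder and award number. The funder ID and award number below are made-up example values.

```python
# A rough sketch of one way to look for grant-output links: query the Crossref
# REST API for works whose funding metadata mentions a given award. The funder
# ID and award number used in the example call are made-up values.
import requests

def works_linked_to_award(funder_id, award_number):
    """Return DOIs of works whose metadata references the given funder + award."""
    params = {
        "filter": f"award.funder:{funder_id},award.number:{award_number}",
        "select": "DOI",
        "rows": 100,
    }
    response = requests.get("https://api.crossref.org/works", params=params)
    response.raise_for_status()
    return [item["DOI"] for item in response.json()["message"]["items"]]

# hypothetical example call:
# works_linked_to_award("10.13039/501100000780", "H2020-123456")
```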

Double trouble with DOIs

Detective Matcher stopped abruptly around the corner of a short building, praying that his loud heartbeat wouldn’t give away his presence. This missing DOI case was unlike any other before, keeping him awake for many seconds already. It took a great effort and a good amount of help from his clever assistant Fuzzy Comparison to make sense of the sparse clues provided by Miss Unstructured Reference, an elegant young lady with a shy smile, who begged him to take up this case at any cost.

Crossref metadata for bibliometrics

Our paper, Crossref: the sustainable source of community-owned scholarly metadata, was recently published in Quantitative Science Studies (MIT Press). The paper describes the scholarly metadata collected and made available by Crossref, as well as its importance in the scholarly research ecosystem.

What’s your (citations’) style?

Bibliographic references in scientific papers are the end result of a process typically composed of three steps: finding the right document to cite, obtaining its metadata, and formatting the metadata using a specific citation style. This end result, however, does not preserve the information about the citation style used to generate it. Can the citation style be guessed from the reference string alone? TL;DR: I built an automatic citation style classifier. It classifies a given bibliographic reference string into one of 17 citation styles or “unknown”.
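
To illustrate the general idea only, here is a toy supervised setup that maps reference strings to citation styles. The character n-gram features, the model, and the two made-up training references are assumptions made for this sketch, not the approach described in the post, which would need many training examples per style.

```python
# A toy illustration of the general idea: train a supervised classifier that
# maps reference strings to citation styles. The features, the model, and the
# example references are assumptions for this sketch, not the post's approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# training data: the same (made-up) work rendered in two known citation styles
references = [
    "Smith, J. (2020). An example title. Journal of Examples, 5(2), 10-20.",
    "J. Smith, \"An example title,\" Journal of Examples, vol. 5, no. 2, pp. 10-20, 2020.",
]
styles = ["apa", "ieee"]

classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),   # character n-grams
    LogisticRegression(max_iter=1000),
)
classifier.fit(references, styles)

# classify a new (made-up) reference string
print(classifier.predict(["Doe, A. (2019). Another made-up paper. Fictional Review, 1(1), 1-9."]))
```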