
Dominika Tkaczyk

Dominika joined Crossref’s R&D team in August 2018 as a Principal R&D Developer. Within her first few years at Crossref, she focused primarily on the research and development of metadata matching strategies to enrich the Research Nexus network with new relationships. In 2024 Dominika became Crossref’s Director of Data Science and launched the Data Science Team. The goal of the Data Science Team is to explore new possibilities for using Crossref’s data to serve the scholarly community, to continue enriching the scholarly record with more metadata and relationships, and to develop strong collaborations with like-minded community initiatives. Before joining Crossref, Dominika was a researcher and data scientist at the University of Warsaw, Poland, and a postdoctoral researcher at Trinity College Dublin, Ireland. She received a PhD in Computer Science from the Polish Academy of Sciences in 2016 for her research on metadata extraction from full-text documents using machine learning and natural language processing techniques.

Read more about Dominika Tkaczyk on her team page.

What if I told you that bibliographic references can be structured?

Last year I spent several weeks studying how to automatically match unstructured references to DOIs (you can read about these experiments in my previous blog posts). But what about references that are deposited not as an unstructured string, but as a structured collection of metadata fields? Are we matching them, and how? Let’s find out.
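To make the distinction concrete, here is what the same citation might look like in the two forms, sketched as plain Python data. The field names follow Crossref’s reference metadata fields (author, year, article-title, journal-title, volume, first-page), though the example values are illustrative and any given deposit may include only a subset of the fields.

```python
# The same (illustrative) reference in two forms.

# 1. Unstructured: the whole citation is a single opaque string that has to
#    be interpreted before it can be matched.
unstructured_ref = (
    "Tkaczyk D. et al. (2015) CERMINE: automatic extraction of structured "
    "metadata from scientific literature. IJDAR 18(4), 317-335."
)

# 2. Structured: individual metadata fields are deposited separately,
#    so no parsing is needed before matching.
structured_ref = {
    "author": "Tkaczyk",
    "year": "2015",
    "article-title": "CERMINE: automatic extraction of structured metadata "
                     "from scientific literature",
    "journal-title": "International Journal on Document Analysis and Recognition",
    "volume": "18",
    "first-page": "317",
}
```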

Reference matching: for real this time

In my previous blog post, Matchmaker, matchmaker, make me a match, I compared four approaches for reference matching. The comparison was done using a dataset composed of automatically generated reference strings. Now it’s time for the matching algorithms to face the real enemy: the unstructured reference strings deposited with Crossref by some members. Are the matching algorithms ready for this challenge? Which algorithm will prove worthy of becoming the guardian of the mighty citation network? Buckle up and enjoy our second matching battle!
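A battle like this ultimately comes down to scoring each algorithm’s links against known target DOIs. The snippet below is a rough sketch of such scoring, not the exact methodology from the post: it assumes a hypothetical gold-standard mapping from reference to true DOI (or None when no target exists) and computes precision, recall, and F1 for a matcher’s output.

```python
# Hypothetical evaluation sketch: score a matcher against a gold-standard
# sample. Both dicts map a reference id to a DOI string, or None.

def evaluate(matched, gold):
    true_links = sum(
        1 for ref, doi in matched.items()
        if doi is not None and doi == gold.get(ref)
    )
    returned_links = sum(1 for doi in matched.values() if doi is not None)
    existing_links = sum(1 for doi in gold.values() if doi is not None)

    precision = true_links / returned_links if returned_links else 0.0
    recall = true_links / existing_links if existing_links else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy usage with made-up DOIs:
gold = {"r1": "10.1234/a", "r2": "10.1234/b", "r3": None}
matched = {"r1": "10.1234/a", "r2": None, "r3": "10.1234/c"}
print(evaluate(matched, gold))  # (0.5, 0.5, 0.5)
```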

Matchmaker, matchmaker, make me a match

Matching (or resolving) bibliographic references to target records in the collection is a crucial task in the Crossref ecosystem. Automatic reference matching lets us discover citation relations in large document collections, calculate citation counts, h-indexes, impact factors, etc. At Crossref, we currently use a matching approach based on reference string parsing. Some time ago we realized there is a much simpler approach. And now it is finally battle time: which of the two approaches is better?
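To give a flavour of what a simpler, search-based approach can look like, here is a minimal sketch that sends the raw reference string to the public Crossref REST API and accepts the top hit only above a relevance-score threshold. The /works endpoint and its query.bibliographic parameter are real, but the threshold value and the overall logic are illustrative, not the production matcher described in the post.

```python
import requests

API = "https://api.crossref.org/works"

def match_reference(ref_string, threshold=75.0):
    """Illustrative search-based matching: return a DOI or None.

    Queries Crossref's search index with the unparsed reference string and
    accepts the best hit only if its relevance score clears the (made-up)
    threshold.
    """
    params = {"query.bibliographic": ref_string, "rows": 1}
    items = requests.get(API, params=params, timeout=30).json()["message"]["items"]
    if items and items[0].get("score", 0) >= threshold:
        return items[0]["DOI"]
    return None

print(match_reference(
    "Tkaczyk D. et al. CERMINE: automatic extraction of structured metadata "
    "from scientific literature. IJDAR 18(4), 2015."
))
```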

What does the sample say?

At Crossref Labs, we often come across interesting research questions and try to answer them by analyzing our data. Depending on the nature of the experiment, processing over 100M records might be time-consuming or even impossible. In those dark moments we turn to sampling and statistical tools. But what can we infer from only a sample of the data?
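As a concrete (and entirely made-up) example of the kind of inference involved, the sketch below estimates a proportion over the whole collection from a random sample and attaches a normal-approximation 95% confidence interval to the point estimate.

```python
import math

# Made-up numbers: in a random sample of 5,000 records, 1,240 turn out to
# have some property of interest (say, at least one reference deposited).
n = 5000
k = 1240

p_hat = k / n                             # sample proportion (point estimate)
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
z = 1.96                                  # ~95% coverage, normal approximation

low, high = p_hat - z * se, p_hat + z * se
print(f"Estimated proportion: {p_hat:.3f} (95% CI {low:.3f}-{high:.3f})")
```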