Sponsors make Crossref membership accessible to organizations that would otherwise face barriers to joining us. They also provide support to facilitate participation, which increases the amount and diversity of metadata in the global Research Nexus. This in turn improves the discoverability and transparency of the scholarship behind the works.
We are looking to work with an individual or organization to perform an audit of, and propose changes to, the structure and information architecture underlying our website, with the aim of making it easier for everyone in our community to navigate the site and find the information they need.
Proposals will be evaluated on a rolling basis. We encourage submissions by May 15, 2025.
At the end of last year, we were excited to announce our renewed commitment to community and the launch of three cross-functional programs to guide and accelerate our work. We introduced this new approach to work towards better cross-team alignment, shared responsibility, and improved communication and learning, and to make more progress on the things members need.
This year, metadata development is one of our key priorities and we’re making a start with the release of version 5.4.0 of our input schema with some long-awaited changes. This is the first in what will be a series of metadata schema updates.
What is in this update?
Publication typing for citations
This is fairly simple: we’ve added a ‘type’ attribute to the citations members supply. This means you can identify a journal article citation as a journal article, but more importantly, you can identify a dataset, software, blog post, or other citation that may not have an identifier assigned to it. This makes it easier for the many thousands of metadata users to connect these citations to identifiers. We know many publishers, particularly journal publishers, already collect this information, and we hope they will consider making this change and depositing citation types with their records.
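As a rough sketch of what a typed citation could look like in a deposit, the Python snippet below builds a small citation list in which one reference is marked as a dataset. The element names used here (citation_list, citation, unstructured_citation) follow the general shape of Crossref deposit XML, but the exact spelling of the new attribute and its allowed values are assumptions; the authoritative definitions are in the 5.4.0 schema itself.

    # Sketch: a citation list with one typed citation, assuming the new
    # typing is expressed as a 'type' attribute on the citation element.
    # Names and values are illustrative; consult the published 5.4.0 schema.
    import xml.etree.ElementTree as ET

    citation_list = ET.Element("citation_list")

    # A reference to a dataset with no DOI: the type attribute tells
    # metadata users what kind of work is being cited.
    citation = ET.SubElement(
        citation_list, "citation", {"key": "ref1", "type": "dataset"}
    )
    unstructured = ET.SubElement(citation, "unstructured_citation")
    unstructured.text = (
        "Smith, J. (2024). Ocean temperature readings, 2010-2020 [Data set]."
    )

    print(ET.tostring(citation_list, encoding="unicode"))

Even when a citation carries no identifier, a declared type such as ‘dataset’ or ‘software’ gives downstream matching services a strong hint about where to look for one.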
SEMANTIC WEB: GOOGLE HAS THE ANSWERS, BUT NOT THE QUESTIONS
The Google v. Semantic Web discussion at the AAAI (American Association for Artificial Intelligence) featured plenty of confrontation and even some rational argument, but it may chiefly be remembered as the day when Google responded to the challenge of semantic web thinking by saying that the semantic web movement did not matter - thereby demonstrating that it did.
by David Worlock, Chairman
And we thought that the real battle this year was between net neutrality and the network owners. Or between those who think that click fraud crucially undermines Google, and those who think it doesn’t matter. We were wrong. July’s “Thrilla in Manila” was the discussion between Tim Berners-Lee and the Google Director of Search, Peter Norvig, at the Boston AAAI meeting. And it is an important moment because Berners-Lee’s assertion that the last semantic web building blocks are moving into place comes at exactly the time when Google seems anxious to diminish semantic web searching. It is a good guess that the latter is a response dictated by threat. A world where keyword searching is reduced to the ground floor of a building of many storeys, and where it may even be an advantage to be a new market entrant with no history, is a world where Google would have to progressively re-invent itself. And what is more difficult, in the recent history of these things, than a company created by a technology re-inventing itself in terms of a new technology?
So Google’s Boston blows were first of all aimed at the reality test. Like STM publishers pointing to the unlikelihood of academic researchers adding metadata to articles for repository filing, Google pointed to user and webmaster incompetence as the chief reason why semantic interoperability was doomed to a long, slow, and painful generative process. If users cannot configure a server or write HTML, how can they understand all this stuff? And then suppliers would slow it down by trying to make it proprietary. And then, machine-to-machine interoperability would encourage deception (obviously the click fraud business is hurting). The answer to the Semantic Web, from a Google stance, thus appears to be: very interesting, but not very soon.
Dancing like a bee and stinging like a butterfly, Tim Berners-Lee clearly had the answers to this. The reason why the semantic web appears threatening to those who have entrenched tenancies in search is probably because it is going quicker than expected. His original ‘layer cake’ diagram, a feature on the conference circuit for five years, could now be completed at all levels. RDF as a data language is now well-established (think of RSS). Ontologies, mostly in narrow vertical domains, are moving into place, though there may be issues about relating them to each other. Query and rules languages now populate the other layers, with one of the former, SPARQL, emerging this year as a W3C candidate recommendation (6 April 2006). In a real sense this is the missing link which makes the Semantic Web a viable proposition, and at the same time joins it to the popular hubbub around Web 2.0. If part of the latter dream is data sourcing from a wide variety of service entities to create new web environments from composite content, then SPARQL sitting on top of RDF looks closest to realising that idea. In an important note in O’Reilly XML.com (SPARQL: Web 2.0 Meet the Semantic Web; 16 September 2005), Kendall Clark wrote “Imagine having one query language, and one client, that lets you arbitrarily slice the data of Flickr, del.icio.us, Google, and your three other favourite Web 2.0 sites, all FOAF files, all of the RSS 1.0 feeds (and, eventually, I suspect, all Atom 1.0 feeds) plus MusicBrainz etc”.
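Clark’s scenario is easy to make concrete. The sketch below (assuming Python and the rdflib library, with a small invented FOAF document) runs one SPARQL query over RDF data; the same query shape would work unchanged over any FOAF source.

    # Sketch: one SPARQL query over RDF data, using rdflib and a tiny
    # invented FOAF document. The data and names are illustrative only.
    from rdflib import Graph

    FOAF_DATA = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <http://example.org/alice> a foaf:Person ;
        foaf:name "Alice" ;
        foaf:knows <http://example.org/bob> .

    <http://example.org/bob> a foaf:Person ;
        foaf:name "Bob" .
    """

    g = Graph()
    g.parse(data=FOAF_DATA, format="turtle")

    # The same SELECT would run against any FOAF source: who knows whom?
    QUERY = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?friend
    WHERE {
        ?person foaf:name ?name ;
                foaf:knows ?other .
        ?other foaf:name ?friend .
    }
    """

    for row in g.query(QUERY):
        print(f"{row.name} knows {row.friend}")

This is the composite-content idea in miniature: the query layer, not the individual source, defines the slice of data you get back.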
Imagining that might well impel you into the ring with Tim Berners-Lee. If Google has to be re-invented, the process of recognition of change has to be slowed. Denying the speedy reality of the semantic web becomes essential while furious R&D takes place. And content and information service providers are not just spectators of this, but participants too.