2014 CrossRef annual meeting and workshops
The 2014 CrossRef annual meeting and workshops were held at the Royal Society in London on November 11 and 12. They were lively, international events, with over 150 attendees from many countries, including Asian countries such as Korea, China, Japan, and India.
On the first day, the CrossRef workshops were held, with two parallel sessions organized in the morning: one for people who were new to CrossRef and the other for those already familiar with its technical aspects. Since I was new to CrossRef, I attended the beginner’s session, titled “Boot camp: an introduction to CrossRef.” Three speakers from CrossRef, Carol Meyer, Anna Tolwinska, and Susan Collins, gave presentations on a general introduction to CrossRef, current tools for small publishers, and CrossCheck plagiarism screening, respectively. It was a good opportunity for beginners to learn about the CrossRef organization and its products. I found it quite impressive that such a small organization, founded just 14 years ago, had grown into an important one leading technological innovation in scholarly publishing worldwide (Fig. 1).
In the other parallel session, on system updates, Chuck Koscher, Patricia Feeney, Mike Yalter, and Patricia Feeney gave four successive presentations titled “System update: an in-depth and technical look at CrossRef,” “Support update and multiple resolution overview,” “DOI co-access for books,” and “Reports and how to use them,” respectively. This was a technical session aimed mainly at practitioners of the CrossRef products. Multiple resolution means assigning multiple URLs to a single digital object identifier (DOI). Developing efficient tools for bringing books into this information space also seems important.
A plenary session titled “What’s new at CrossRef” was held in the afternoon, with three speakers from CrossRef. Geoffrey Bilder gave a talk on the Text and Data Mining (TDM) application programming interface (API). TDM, for which a beta service was launched in 2014, was an important keyword at this year’s CrossRef meeting. I thought this was a wonderful idea that could help researchers greatly. Bilder explained that to develop a TDM API, it was necessary to have access to full texts and to license information. He then described the TDM workflow and showed a demo. I felt that this would grow into an important and widely used product in the near future.
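To make the workflow concrete, the following is a minimal sketch in Python of the lookup step a TDM client might perform against the present-day CrossRef REST API; the DOI is hypothetical, and the full-text link and license fields appear only when the publisher has deposited them.

    import requests

    # A sketch of the TDM lookup step; the DOI below is hypothetical.
    DOI = "10.1234/example.doi"

    resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    # Full-text links, present only when deposited by the publisher
    for link in work.get("link", []):
        print(link.get("content-type"), link.get("URL"))

    # License metadata that a TDM client checks before mining
    for lic in work.get("license", []):
        print(lic.get("URL"))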
In the second talk of the plenary session, Kirsty Meddings gave a presentation on new developments in CrossMark. The CrossMark logo identifies a publisher-maintained copy of a piece of content and provides important publication record information. One new development is the addition of information on linked clinical trials.
In the third talk, Karl Ward gave a presentation on small publisher tools. These tools are intended mainly for small publishers, or publishers in developing countries, that have difficulty integrating CrossRef services. He explained that CrossRef depositor was being developed to extract the reference list from portable document format (PDF) files and add it to existing DOI records. I found his summary of CrossRef’s mandate, to be the citation linking backbone for all scholarly information in electronic form, very nice and concise.
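As a rough illustration of that citation linking backbone, the sketch below, again using a hypothetical DOI, retrieves a work’s deposited reference list from the CrossRef REST API; references that CrossRef has matched carry DOIs of their own, which is what makes the linking possible.

    import requests

    # Sketch: fetch the deposited reference list for a hypothetical DOI.
    DOI = "10.1234/example.doi"

    resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]

    # "reference" is present only when the publisher deposited references;
    # matched entries carry a "DOI" key, forming the citation links.
    for ref in work.get("reference", []):
        print(ref.get("DOI", "(unmatched)"), ref.get("unstructured", ""))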
After the plenary session, there were two parallel sessions: one on the CrossCheck introduction and development roadmap, given by Rachael Lammey and Shivendra Naidoo, and the other on the book interest group, given by Carol Meyer and Jennifer Kemp.
On the second day, the CrossRef annual meeting was held. Ed Pentz, the executive director of CrossRef, gave the first talk, titled “Introduction and CrossRef overview,” in which he reviewed the current state of CrossRef, including its finances. Chuck Koscher next gave a talk on the system update, and Geoffrey Bilder spoke about the strategic initiatives update. Bilder described the strategic initiative life cycle and many new initiatives at CrossRef, including DOI event tracking, Wikipedia linking and outreach, linked clinical trials, small publisher tools, CrossRef depositor, text and data mining, the CrossRef representational state transfer (REST) API, and linking data and publications.
In the CrossRef flash update session, five speakers from CrossRef, Carol Meyer, Rachael Lammey, Kirsty Meddings, Karl Ward, and Ed Pentz, gave presentations on “Branding,” “CrossCheck & CrossRef Text and Data Mining,” “CrossMark & FundRef,” “CrossRef Metadata Search,” and “Open Researcher and Contributor ID (ORCID),” respectively. Since there are so many people in Korea with the same name as mine, I have a strong interest in the concept of ORCID. Currently, an author with a specific ORCID iD claims a chosen set of papers as his or hers, so false or inaccurate claims are inevitable. It would be useful to have an efficient system for reducing such errors in author identification.
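As an illustration of how such an identifier can be put to work, the CrossRef REST API allows works to be filtered by an ORCID iD recorded in their metadata; the sketch below uses an illustrative iD only.

    import requests

    # Sketch: list works whose CrossRef metadata carries a given ORCID iD.
    ORCID = "0000-0002-1825-0097"  # illustrative iD only

    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"orcid:{ORCID}", "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title = (item.get("title") or ["(no title)"])[0]
        print(item.get("DOI"), title)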
In the afternoon, six invited speakers gave talks. In the keynote speech, titled “Ways and needs to promote rapid data sharing,” Laurie Goodman of GigaScience emphasized the importance of prepublication data sharing, especially in biomedical fields. She explained various obstacles to publishing and sharing scientific data and argued that open data sharing would benefit authors as well as the general public and reduce the publication of false and irreproducible results. Data publishing is not a new concept in the physical sciences, such as experimental high-energy physics, and experience in those fields can be useful.
Next came the talks of four invited speakers and a panel discussion on the theme of improving peer review. In the talk titled “Securing trust and transparency in peer review,” Adam Etkin of Peer Review Evaluation (PRE) presented the activities of PRE. He emphasized that in the current environment, with many predatory journal publishers and prominent cases of faulty research being published, it was important to maintain the quality of peer review. As a way to create incentives for best practices in peer review, PRE works with publishers to provide independent validation of the review process and issues an endorsement that quality peer review has been conducted.
In the second talk of the panel, titled “bioRxiv: the preprint server for biology,” Richard Sever of Cold Spring Harbor Laboratory Press introduced bioRxiv, a new preprint server for biology launched in 2013 and modeled after arXiv, the physics preprint archive launched in 1991. A remarkable new feature is that a DOI is assigned to each submitted preprint, so that it can be cited easily.
In the third talk of the panel, Mirjam Curno of Frontiers spoke about Frontiers’ collaborative peer review. Frontiers publishes 48 open access journals, which use a new peer review system consisting of an independent review phase and an interactive review phase; Curno said that there were about 50,000 editors for their journals. The new feature is that during the interactive review phase, authors and reviewers can interact with each other openly through real-time comments in a discussion forum, and once a paper is accepted, the reviewers’ names are disclosed. Recently, there have been many discussions about the deficiencies of the conventional peer review system, and I felt that the attempts being made at Frontiers are valuable and might provide new directions in peer review.
Another interesting attempt at a new peer review system was presented in the talk “Do it once, do it well: questioning submission and peer review traditions” by Janne-Tuomas Seppänen of Peerage of Science. In this system, authors submit a manuscript to Peerage of Science before submitting it to any journal. Once it is submitted, any qualified peer can engage to review the manuscript, and the peer reviews are themselves peer reviewed, which both improves and quantifies the quality of peer review. The peer review process is available to all subscribing journals, and authors may accept a publishing offer from a subscribing journal or choose to export the peer reviews to any other journal.
Finally, in the closing keynote speech, Richard Jefferson of Cambia gave a talk titled “Innovation cartography: creating impact from scholarly research requires maps not metrics.” He presented an online service he created, named the Lens, which provides a large number of patent documents integrated with the academic literature as open resources. Jefferson claimed that this would allow document collections, aggregations, and analyses to be shared, annotated, and embedded, forging an open mapping of the world of knowledge-directed innovation. In addition to this somewhat abstract concept of innovation cartography, I also learned the interesting story of Jan Huyghen van Linschoten, who stole the technology of maritime cartography from the Portuguese and published a book about it in 1596. Jefferson claimed that the sharing of crucial technological knowledge made possible by this act paved the way for rapid developments in shipbuilding, sailing, logistics, cartography, navigation, insurance, investment, and finance.
In summary, I found many presentations at this meeting to be quite useful and interesting. CrossRef seems to be based on a very simple but extremely useful idea: that all academic content can be connected through the use of DOIs. In the age of the Internet, such efficient interconnection is not only feasible but also produces new information and insight. Having a good idea is very important; more important, however, is to pursue and implement the idea and produce something useful for human beings, as the people at CrossRef have done.
Notes
No potential conflict of interest relevant to this article was reported.
Acknowledgements
This work was supported by a travel grant from the Korean Council of Science Editors (2014).