The Future of Academic Publishing – An Interview with Stefan Gradmann
The print culture is being increasingly abandoned in some branches of academia. Stefan Gradmann, president of the German Society for Information Science and Information Practice, outlines the future of academic publishing.
Mr Gradmann, what do you think the future holds for academic publishing?
That of course depends to a very great extent on the cultures that exist in the various subjects. It is already noticeable that some academics are leaving the Gutenberg galaxy we have built up over the past few centuries.
What concrete form does this take?
The most prominent examples are to be found in the field of biomedicine, where we now see a paradigm that was created by the Dutch publisher Jan Velterop and is known as “nanopublications” – systems of statements that are published in the form of linked data.
The statements are no longer even combined into a narrative in article form; instead, the research data themselves are published – together with the methods that operate on those data. Ultimately, this renders the distinction between publication and primary data obsolete. That is the one extreme.
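The structure Gradmann describes can be sketched in a few lines: a nanopublication bundles one atomic, machine-readable assertion (a subject–predicate–object triple, as in linked data) with provenance about who stated it and when. The sketch below uses plain Python tuples rather than a real RDF toolkit, and all URIs and names are invented for illustration.

```python
# A hypothetical nanopublication: one atomic assertion expressed as a
# subject-predicate-object triple, bundled with provenance metadata.
# All URIs below are invented for illustration only.

assertion = (
    "http://example.org/gene/TP53",            # subject
    "http://example.org/rel/associatedWith",   # predicate
    "http://example.org/disease/LiFraumeni",   # object
)

provenance = {
    "attributedTo": "http://example.org/person/jdoe",
    "generatedAt": "2012-05-01",
    "derivedFrom": "http://example.org/dataset/42",
}

nanopublication = {"assertion": assertion, "provenance": provenance}

# The statement can now be processed by machine, e.g. unpacked and
# queried, without any surrounding narrative article.
subject, predicate, obj = nanopublication["assertion"]
print(subject, predicate, obj)
```

Because each publication is a single structured statement rather than prose, a machine can aggregate millions of them and query across publications, which is exactly what the article format prevents.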
And the other?
On the other hand we have the classic hermeneutic humanities that are still very strongly interwoven with traditional publication formats such as monographs. Such formats – even if they appear in electronic form nowadays – are designed and thought of as analogous to print. The print culture here still has many years ahead of it.
Is this coexistence problematic?
The problem here is that the print paradigm becomes more and more expensive the less it is used. In terms of the economics of publication, there will be growing pressure to change. What is more, this pressure will certainly extend well beyond what we call open access nowadays. In any case, today’s open access model is actually merely a change in financial flows; the publication process itself has not changed in terms of its quality.
What advantages do the new publication formats offer?
Plenty. The most important advantage is that they allow things to be processed by machine. There is a nice essay by Gregory Crane, who set up the Perseus Digital Library, entitled “What Do You Do with a Million Books?”. If you have digitized a million books, you cannot read them all, but you can have a machine process them and make analyses that would not be possible with traditional analogue reading.
Another advantage lies in the ease, speed and low costs of reproduction. The publication is accessible to far more people simultaneously, and it is thus also possible to publish much more.
Will these formats change the role of publishers?
I could imagine that publishers will increasingly establish a business model in the area of selection and aggregation services. In the very near future we are likely to end up with too many resources in digital form, with the result that we cannot physically take them all in. In other words, someone will have to collate all this information and prepare it in such a way that it can be processed by humans. This is something publishers could do – and large academic publishing houses like Elsevier and de Gruyter are already investing in this area.
Which output media will play a role?
They will all be Web-based digital output media, and the question of media format will become ever less relevant. As our reading devices become better and better, I believe that we will be accessing a great deal of academic literature in digital form only.
Which technical problems still need to be resolved?
It all starts with the fact that it is difficult – for example in the linked data standard RDF – to determine who said what and when. Work is currently ongoing to develop a solution for this. Another major issue is trustworthiness – has a resource been changed or not? Web formats are highly dynamic, so ensuring that a publication is presented in precisely the form in which the author intended is no trivial matter.
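The trustworthiness question, whether a resource has been changed or not, is commonly addressed with cryptographic checksums: the published text is fingerprinted once, and any later copy can be verified against that fingerprint. A minimal sketch using only Python’s standard library (the publication text is invented for illustration):

```python
import hashlib

# A minimal sketch: fingerprint a publication with SHA-256 so that any
# later change to the resource can be detected. The text is invented.

publication = "Statement: gene X is associated with disease Y."
fingerprint = hashlib.sha256(publication.encode("utf-8")).hexdigest()

# Later, a reader re-computes the hash of the copy they retrieved and
# compares it with the published fingerprint.
retrieved = "Statement: gene X is associated with disease Y."
unchanged = hashlib.sha256(retrieved.encode("utf-8")).hexdigest() == fingerprint

print(unchanged)  # True only if the retrieved copy is byte-identical
```

This guarantees only integrity, not the “who said what and when” problem, which in the linked-data world is typically handled by attaching provenance to named graphs rather than to individual triples.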
Then there is the question of authorization scenarios. There is some content that is supposed to be visible to only very few people at first; later, however, once the research process is complete, it should be accessible to everyone. Such scenarios were not needed in the past because there was always a degree of media discontinuity that ensured that only that which was printed on paper was publicly accessible.
Are there also some legal problems?
Yes, that’s a tricky subject. Our copyright law is still nationally based, which simply doesn’t work on the Web. We have to give a great deal of thought to how we can create copyright law that can be implemented on the Web. After all, our current strategy of simply translating the copyright laws from the analogue era to the digital world is unlikely to get us very far.
The interview was conducted by a freelance journalist based in Bonn.
Translation: Chris Cave
Copyright: Goethe-Institut e. V., Internet-Redaktion