Providing insights beyond mainstream research assessment objects in Computer Science
The Inside Stories provide insight into the GraspOS pilot studies, presenting the aims and current status of their activities. Learn about what the pilots are up to and how they aim to bring Open Science and research assessment together.
We had the pleasure of discussing the Computer Science pilot in GraspOS with Laurent Romary (Inria), Silvio Peroni (University of Bologna), and Angelo Di Iorio (University of Bologna).
The pilot study covers the higher education and research community in computer science at large and aims to analyse its scholarly specificities. In particular, it focuses on how the community, given its strong digital background, can offer specific insights into Open Science and research assessment.
Hello Laurent, Angelo and Silvio, can you tell us more about some of the challenges in the current research assessment system that prompted the development of this pilot study?
Computer science has long been a domain where openness has been a central concern, for instance in the development of open source software by distributed groups of developers, or in the early adoption of online channels allowing the open dissemination of research papers and, beyond them, of any kind of research output. Still, some essential characteristics of research in computer science are not accounted for in traditional cross-domain research assessment practices, in particular the importance of software as a research output that can be cited as a major achievement in a career. Another aspect that is usually missed is the importance of conferences as a major publication channel in computer science, sometimes surpassing journals in quality and prestige in fast-evolving fields such as natural language processing.
Are there specific aspects of Open Science that you are looking into in relation to responsible research assessment?
Our priority has been to provide an exhaustive landscape analysis of Open Science practices in computer science and to list all the existing initiatives and infrastructures (e.g. Software Heritage, DBLP) that can serve as a basis for better coverage of our field from the point of view of research assessment.
What activities has the pilot carried out, and how can these support the move towards a research assessment system that takes Open Science into account?
Our group has produced a comprehensive situation analysis in the form of a preprint report, from which we identified a few experiments that could be carried out, for instance providing a software usage dashboard for researchers and institutions. The dashboard is planned to be built on the identification of software mentions in publications, with the aim of surfacing both software reuse and dissemination within our community.
By focusing on software in the computer science field, which may sound like stating the obvious, we also expect to provide insights for other communities where software plays an increasing role, in particular in the production, analysis and dissemination of research data. This may contribute to a better description of a wide range of research objects in scientific assessment at large.
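To make the dashboard idea more concrete, here is a minimal, purely illustrative sketch of how software mentions might be detected in publication text. The hard-coded list of software names and the simple whole-word matching are assumptions made for the sake of the example; the pilot's actual pipeline is not specified here and would rely on curated registries and more robust mention recognition.

```python
import re
from collections import Counter

# Hypothetical seed list of software names; a real pipeline would draw on
# curated registries rather than a hard-coded list.
SOFTWARE_NAMES = ["GROBID", "NumPy", "TensorFlow", "spaCy"]

def count_software_mentions(text: str) -> Counter:
    """Count whole-word mentions of known software names in a text."""
    counts = Counter()
    for name in SOFTWARE_NAMES:
        # \b word boundaries prevent matching substrings of longer words.
        pattern = re.compile(r"\b" + re.escape(name) + r"\b")
        counts[name] = len(pattern.findall(text))
    return counts

if __name__ == "__main__":
    sample = (
        "Models were trained with TensorFlow, arrays handled with NumPy, "
        "and the PDF full texts were extracted using GROBID."
    )
    for name, n in count_software_mentions(sample).most_common():
        if n:
            print(f"{name}: {n} mention(s)")
```

Aggregating such counts across a corpus of publications would yield the kind of raw signal a usage dashboard could visualise, although real-world mention detection also has to handle aliases, version strings and ambiguous names.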
Is there anything else you would like to highlight about the pilot study?
There is still a long way to go between setting up the kind of experiments or proofs of concept we have in mind and integrating them into stable research assessment infrastructures. In particular, we need to see how, in the future, objects such as software or conferences can become fully part of open information sources such as OpenCitations or OpenAlex.
Thank you for your time!