[If you arrived here using the short link bit.ly/fear_of_scooping, you might be interested in reading my first article, “Afraid of Scooping – Case Study on Researcher Strategies against Fear of Scooping in the Context of Open Science”. It was first presented at SciDataCon2016 as a conference paper and then submitted to the Data Science Journal as a research paper. It was accepted there with minor revisions and is currently in the final stages of editing. I have published the pre-print in Zenodo.]
ALLEA, the joint organisation for all European academies (also beyond the EU), has published a new version of the European Code of Conduct for Research Integrity. The previous one came out in 2011 and was co-written by ALLEA and the European Science Foundation (ESF), which has since gone through a major organisational transformation and a change of focus.
I am about to attend a seminar in Finland about the new code (starting in an hour!) and wanted to quickly leaf through it, so that I get the most out of the discussion. So what I’m about to say are super quick impressions of the document. Should I come to reconsider something after a more thorough reading, I will make sure to post it here.
First of all I am really pleased that data is acknowledged as a research output and that citing data is mentioned as a responsible practice:
• Researchers, research institutions and organisations acknowledge data as legitimate and citable products of research.
So yay for that. Unfortunately that was pretty much the only positive surprise. Beyond the parts on data management the document is very much same old, same old. Which is mostly fine in the sense that I don’t disagree with anything (or maybe one thing, but I’ll get back to that). What is said is not so much the problem as what is left unsaid or unaddressed.
Citizen science, for one. I see that the European Citizen Science Association is one of the recognised stakeholder parties, which is a good thing. But the traditional dichotomy between research subject and object is in no way challenged. It is obvious that the researcher assumed in the code is someone with academic training and an affiliation with an institution, such as a university or a research institute. There is no mention of co-design or anything like that. Either you are a researcher (or what is called a partner, but I can hardly see that applying to individual citizens or e.g. informal networks of citizens) or you are a subject (more like an object in this context). Also hardly touched upon are the relationship between research and society, and the responsibility that the research community holds towards the society that funds it.
There is of course this:
• Researchers publish results and interpretations of research in an open, honest, transparent and accurate manner, and respect confidentiality of data or findings when legitimately required to do so.
But without defining what is meant by open, this is very weak. There is nothing about translating results for the general public, engaging relevant stakeholders in discussion or trying to make sure that the research has impact beyond the impact factor. I think choices between publishers should be seen as an integrity issue: do you choose to disregard the unsustainable and even outright dishonest practices of some publishers in your quest for publications, or do you choose to go to one of the responsible and open ones? This could apply both to predatory publishers and, say, corporations bleeding research communities dry and hogging ownership of publicly funded research outputs while making bigger profits than the oil industry.
Which brings me to the chapter that frustrates me the most, 2.7 Publication and Dissemination. The perversity of the current academic publishing model, coupled with the process of gaining merit, is the main threat to research integrity: the journal article is in many, if not most, fields the only way to have research results acknowledged, creating incentives for authorship fraud, data secrecy and p-hacking, among other things. These are all practices that the code aims at preventing! And still the journal article is presented as the default way of disseminating research – without naming it, mind you – with no mention of, let alone encouragement to explore, the plethora of other means provided by the digital environment. I guess it could be argued that since the article is not mentioned as the main type of publication, the code can be applied to all kinds of research outputs in the public space. But just trying to fit this supposed one-size-fits-all T-shirt on data authorship shows that it is just too tight.
To summarize: on the one hand the code is too general, like when it doesn’t acknowledge the paradigm shift of citizen science (BTW, I see that as the real paradigm shift, not open science, as you sometimes hear stated), on the other it’s too specific, like when it addresses article authorship issues, but leaves out data authorship, open collaboration articles, websites, games, you name it.
I would have liked to see bolder and more forward-looking stances. Now the document is like a guest arriving late to the party, trying to jump into a conversation that is already moving on to other topics. The code is, rightfully, described as “a living document that is updated regularly and that allows for local or national differences in its implementation.” The next update cannot come too soon. Hopefully that one will not be outdated upon publication.
This post is based on my conference paper presentation at the SciDataCon 2016. The paper was part of a session called “Getting the incentives right: removing social, institutional and economic barriers to data sharing“. My slides can be seen here bit.ly/fearofscooping. The abstract for the paper has been published on this blog here. I will be submitting a full paper for the Data Science Journal special SciDataCon 2016 collection. A preprint will be available on Zenodo on 1 October 2016.
Getting scooped, having your research idea or results published by someone else, is a common fear among researchers. It can be a major stress factor and an energy drain.
The risk of scooping is often used as a counter argument for open science, especially open research data.
One recent testament to this argumentation is a New England Journal of Medicine editorial, in which a group of over 200 medical researchers presented their conditions for data sharing, including embargoes up to five years, fees for data reuse and processes for quality control.
Not all scooping is illegitimate. Most often scooping is accidental. Science has trends and big questions that researchers flock to address. Scooping becomes misconduct only when the idea or content used was taken from another researcher without giving them credit. In the language of research integrity and research ethics, illegitimate scooping is called misappropriation.
In the context of open science making the distinction between legitimate and illegitimate scooping boils down to credit: it’s okay to take inspiration from others and use someone else’s data published with an open license, as long as you cite the source.
I was interviewing researchers doing radically open science for my PhD, which deals with research integrity, and the fear of scooping came up. I started to wonder what made it possible for these individuals to do what they did despite the fear, seen as an obstacle to openness by so many of their peers.
In this case study I have looked into the openness strategies and practices of two open collaboration research projects created by Finnish researchers.
The Social Media for Citizen Participation, or SOMUS, project was created by a loose online collective called the Open Research Swarm. The Swarm operated on the Finnish microblogging service Jaiku, which has since ceased to exist. SOMUS combined methods from engineering sciences and social sciences, co-creating and testing applications and social media platforms with citizen stakeholders.
The NMR Lipids Project is an ongoing open scientific collaboration project to understand the atomistic resolution structures of lipid bilayers. The discussion happens on a Blogspot-based blog, while manuscripts and data are developed on GitHub. The arXiv and Zenodo repositories are used for preprints.
The primary sources of this study are interviews with two key researchers from each project. My methodological toolbox includes influences from social science history and the social psychology of science. For understanding and describing the projects I have used cultural-historical activity theory and its activity system model.
The openness strategies of the two projects could be described by the term “open by default”. There were no conscious measures taken to prevent scooping. The SOMUS project proposal was drafted completely openly. Instead of scooping them, competitors aiming for the same funding call ended up actively avoiding overlaps in their proposals; some even contributed to the SOMUS proposal. The tragedy of SOMUS was that sharing funding among the Open Swarm was difficult and created tension, catalysing the collective’s disintegration. Also, the carefree enthusiasm resulted in a lack of concern for data management. Today only a PDF report remains openly available online.
The NMRLP turned the tables: instead of worrying about scooping, it has made sure that the participants don’t themselves inadvertently scoop anyone. Participants are expected to credit ideas received even in informal discussions. Anyone who has commented on the blog is eligible for co-authorship. It remains for each participant to decide for themselves whether their contribution is enough. The interviewed researchers said that neither of the project’s published articles has free-rider co-authors, something that is actually rare in their field.
I found out that instead of being a disincentive, the fear of scooping actually acted as an incentive for one of the projects. For the researcher in question it was a way of getting rid of a constant stress. When your work is published online from the get-go, it is easier to prove priority.
All of the interviewed researchers named the unfairness of the academic publishing model as a motivation for creating alternative ways of disseminating research. One of the projects also wanted to address the normalization of p-hacking style bad practice in their field.
Even though the interviewees showed a level of mistrust for the research community at large, they had a high level of trust in their immediate community.
They worried about advancing their careers, and a precarious career stage, meaning a lack of funding or a permanent position, was a key concern for most, but they were not prepared to compromise their principles in pursuing success. All named scientific curiosity as a source of fulfilment and motivation. They were aware of their pioneer status and excited by it.
Making generalizations based on a case study is always a risky business, but with maybe half a grain of salt there are general lessons to be learned here.
The interviews show a link between understanding and recognising research integrity and research ethics issues and the willingness to share. Research integrity training for researchers is already a policy priority, at least in Europe, and this conclusion only adds to its importance.
Exploring beyond one’s immediate research field can foster new ideas, research methods and questions. Again, multidisciplinarity is a research policy staple, but more should be done to make it mainstream.
Data citation principles and practices need to be in place ASAP, so that researchers’ work doesn’t get scooped just because no one knows how to give proper credit for data.
The open collaboration projects show a worrying lack of engagement from women. The level of female participation was in both cases below what can be considered typical for the fields: about 25% for SOMUS and 1/30 for NMRLP. The gender-specific concerns in the way of sharing should be recognised and addressed.
The experiences of empowerment, excitement and curiosity that open sharing and collaboration can offer should be communicated more, with less focus on buzzwords, policies, requirements and demands.
This could be done through case examples and champions. But in order to have them, efforts in sharing should be rewarded. One of my four interviewees grew tired of the precariousness of a researcher’s life and is working as a college teacher, while another is contemplating leaving academia altogether because the numerous pats on the back are yet to translate into project funding or a position.
So far, this has been a mentally tough year for me. I wrote a long post detailing all the difficulties, but then decided to erase it. The main reason why I’ve had a tough time is that there have been too many opportunities, too many interesting directions to pursue, and I’ve ended up being snowed under with work. And with me, too much work means that not much work gets done. But to complain about my own inability to satisfactorily organize my life, when I have the most freedom and security a person can have, would be juvenile and ungrateful. I am writing this post from a seaside café in Helsinki, while my daughter is playing with her father nearby. It breaks my heart to sit here in comfort and safety and think about my researcher colleagues in Turkey, not to mention academics from Syria and other conflict-ridden areas of the world.
I’m drafting a conference paper about the fear of being scooped due to sharing research outputs and have therefore been thinking a lot about the social side of open science. For me, adopting an open-by-default attitude towards my work has been a no-brainer. I haven’t been communicating about it a lot lately, but that’s more due to humble progress than anything else. I personally feel zero fear when it comes to opening my work. I’m not saying this to brag or blame, as I am very cognisant of the unique and privileged situation I’m in.
First of all, I don’t consider the likelihood of being scooped very high: the choice of research subject and the way I’ve formulated the research questions are the sum of years’ worth of personal experience and networking. In social science a research project can be as unique as a fingerprint: relationships with interviewees, understanding of the phenomena, familiarity with sources etc. often cannot be replicated (which makes the transparency of the process ever more crucial, but that’s another can of worms, not to be opened in this particular post). Secondly, if someone nevertheless tried to scoop me, it would be very easy to prove priority, both because of the one-of-a-kindness of the project and because of the public record on this blog and Zenodo.
Thirdly, and to my mind most importantly, I don’t care if I don’t make it as a professional researcher. If my refusal to pursue certain journals and career paths results in a failure to get funding or positions, so be it. I’m a Finnish citizen. There’s been a lot of talk about the fall of the welfare state, but at least for now, it’s still very much a reality. I don’t have to worry about health insurance or pension. I’m covered just by existing. My daughter will get exactly the same good-quality education whether I’m unemployed or a professor at the University of Helsinki. There are additional personal reasons for my attitude: I’m not a competitive or career-driven person, and would be happy working in almost any kind of job, since I’m able to find intellectual fulfillment outside of professional life as well. I consider myself a millionaire in terms of social capital, through networks of friends and family.
Because of this privileged position of mine, I don’t really see an alternative to open by default. Everything else would feel selfish and wasteful.
So, to pick myself up a little, celebrate all the good things I have, and pay homage to struggling academics all over the world, I decided to write a CV of failures (inspired by Johannes Haushofer). It is a wonderful thing to be allowed to fail. I have probably forgotten many important failures, but I will try to keep a better record from now on, and make reporting failures more routine.
CV of (academy related) failures
Degree programs I didn’t get into
2014 & 2015
Paid position of doctoral student in the Doctoral Programme in Political, Societal and Regional Change, University of Helsinki Doctoral School in Humanities and Social Sciences
Research funding I didn’t get
Emil Aaltonen Foundation, Wihuri Foundation, Finnish Cultural Foundation, Kone Foundation
Funding calls into which I put a significant effort as a consortia member and didn’t get funding
Horizon 2020 call: SEAC.2.2014 Responsible Research and Innovation in Higher Education Curricula
Tieteen tiedotus ry. funding call with a project proposal on a research pitch training tour & related events around Finland
“Crowdsourcing: engaging communities effectively in food and feed risk assessment” : OC/EFSA/AMU/2015/03
Considered applying for the European University Institute (EUI) PhD program, but finally decided against it partly due to laziness, partly because they require certificates that cost money. I could have afforded them, but I resent the general idea that I have to pay in order to apply for a position.
I didn’t apply for funding from the Emil Aaltonen and Wihuri Foundations this year. Just didn’t have the time and energy. And frankly it didn’t feel worth my while, since I suspect that once you have funding for dissertation from one Finnish foundation, the others won’t fund you.
I don’t have that many failures to report because of my junior status as a researcher, but also due to maybe a tad too slack an attitude. I will do my best to keep on failing in order to make this CV more impressive.
As I knew I was going to attend the SciDataCon 2016 in my role as the secretary of the Finnish Committee for Research Data, I decided to submit an abstract for a conference paper for one of the sessions. To kill two birds with four layovers, if you will (flying from Helsinki to Denver and back on a budget is truly an endurance sport). The session that felt most suitable in terms of my ongoing research was one titled Getting the incentives right: removing social, institutional and economic barriers to data sharing. I was happy to find out earlier this week that my abstract has been accepted. The reviewers provided some feedback, which I’m thankful for, and requested a few revisions. Below is the abstract (in its original, unrevised form) with the reviewers’ comments at the bottom. Now all that is left for me to do (in addition to revising the abstract and making it camera ready) is the small task of writing the actual paper. And enduring the gruesome flight itinerary.
Afraid of Scooping? – Case Study on Perceived vs. Actualized Risks of Sharing Research Outputs
Fear of being scooped is among the most commonly voiced concerns by researchers in discussions concerning Open Science in general and Open Data in particular. Does practicing openness make researchers more likely targets of research misconduct? This abstract describes a study based on two cases of “ultra open” science collaboration, with the aim of comparing the perceived risks of real-time sharing of a wide array of research outputs to those that have actually been realized. The focus is on research integrity related concerns. Preliminary findings are promising: for example, an experiment in openly drafting a funding proposal resulted in other teams focusing their projects on different research themes to avoid direct competition.
The current volume of Open Science themed policy initiatives, discussions, events and community action is a clear indicator of the recognition of a need for wider access to scientific knowledge. Open Access to scientific publications is the strand of the movement with the widest acceptance and most success. Progress on other fronts of Open Science, e.g. Open Data and Open Methods (Open Source), has been significantly slower. There are several reasons for this, some due to institutional factors, funding mechanisms or the lack of established workflows. One major factor that stems from the grassroots of research, from individual attitudes and professional cultures adopted by researchers, is fear.
A study commissioned by the Knowledge Exchange network reviewed incentives and disincentives for data sharing. Fear experienced by researchers was at the top of the list of barriers; “fear of competition, of being scooped and therefore reduced publication opportunities.” According to the study, these fears plagued especially early career researchers. They were both afraid of being ridiculed for their immaturity and wary of losing badly needed publications to scooping. When moving up the academic ladder, the possibility of getting laughed at faded in the researchers’ minds, while the threat of scooping persisted. (Van den Eynden & Bishop 2014)
In the academic world, getting scooped means that a researcher (or a team of researchers) beats another to the punch in publishing a research finding. This happens often and only becomes a research integrity offence if one of the researchers/teams got the research idea from the other. Fear of scooping in the context of Open Science can thus be interpreted as fear of becoming a victim of research misconduct.
How well founded is the fear of being unethically scooped due to sharing? What about being ridiculed? Belittling someone’s professional efforts is of course not research fraud, but it doesn’t exactly make one a shining beacon of integrity either. There is very little evidence on the occurrence of research misconduct, not to mention the effects on the victims. Because of the rarity of sharing research outputs beyond publications, our understanding of the social implications of practicing Open Science is also scarce. As a counterbalance to the fear of scooping there are hopes of transparency through Open Science acting as a cure for at least certain forms of irresponsible research behavior. To encourage more researchers to share their research outputs widely, we need to understand the situation better.
For this case study I have interviewed key researchers from two “ultra open” research collaboration projects. In addition to Open Data, the projects have produced all of their content openly online, inviting and welcoming outside participation. One of the projects even allocated funding to a “research swarm”, an open membership online community operating on a microblogging platform. These two individual cases offer us a glimpse into the challenges and possibilities of research integrity in an Open Science era, as well as the social implications of sharing. The case study is part of an ongoing PhD research project on research integrity regulation in Finland.
Open Collaboration Cases
The two open collaboration cases that form the basis of this study are the NMR Lipids Project (NMRLP) and the Social Media for Citizens and Public Sector Collaboration (SOMUS) project.
The projects were chosen based on their approach of extreme openness towards collaboration and co-authorship. Despite their somewhat radical nature, both of the projects have produced traditional research articles, but for example in the case of NMRLP, the authorship of these publications has been based on self-assessment by the contributors (contributors meaning anyone who has commented on the project blog). Another factor was that the two projects have been coordinated by Finland-trained researchers. This is due to the case study being part of a PhD project focusing on national Finnish research integrity regulation. This is why, for example, the Polymath Project was excluded from the cases.
The NMRLP belongs to the field of molecular physics. It is ongoing at the time of writing, with all of the research outputs available online either on the project blog or the GitHub service. SOMUS ran for two years during 2009–10, although the open online community, the Open Research Swarm, that gave birth to the project predates SOMUS by at least a year. SOMUS was a project in the field of multidisciplinary media studies and all of its research outputs were openly published online during the running of the project. Unfortunately, the outputs were not placed in an open repository post-project and are currently accessible only by request. This is an issue that I will also address in the finished paper.
The primary sources for this study are thematic interviews with five coordinating researchers from the chosen projects, together with the open online content of the NMRLP and the archives of SOMUS.
This research is multidisciplinary both in terms of methods and theoretical framework, drawing inspiration from behavioral sciences and social sciences, especially sociology of science & technology, with strong roots in social science history, itself very much multidisciplinary in nature. Oral history has been an important point of reference in conducting the interviews.
The goal is to make the collected interview data available for further research use with an open license and through an open online repository, both in audio and textual formats. This is not common practice in qualitative humanist and social scientific research. Sharing qualitative human data is ethically complicated, but I argue that the obstacles are somewhat exaggerated and the benefits not discussed enough. There is evidence that research subjects can be quite willing to allow re-use of their data, even if the data in question is sensitive (Borg & Kuula 2007), which is not the case for this study. Currently the biggest obstacle to sharing identifiable interview data in Finland is the data privacy ombudsman’s interpretation that so-called broad consent is in breach of the law regulating data privacy.
The case study is still ongoing at the time of writing. More definitive results will be available at the time of SciDataCon 2016. Nevertheless, there are some interesting preliminary observations and findings to be shared. Markus Miettinen, one of the key researchers of the NMRLP, compared in a workshop presentation his initial fears when initiating the project with what actually happened. He listed eight risk scenarios, ranging from a lack of participants to scooping and personal conflicts. According to him (Laine et al. 2015):
[…] the experiences gained on open research during the NMRlipids project have been extremely positive, and none of the major fears we had before starting the project have actualized. Quite the contrary, the open research approach has proven to be an extremely fruitful as well as rewarding way to do research.
SOMUS has gathered similarly promising experiences. Opening up research funding proposals, possibly one of the most radical and controversial ideas developed under the Open Science movement umbrella, receives support in the SOMUS project report (Kronkvist 2011):
Could research funding generally be applied to ‘open calls’? An often-used reason for a closed doors policy is that of competition. Research ideas can be stolen, and project applicants engage in tough battles for funding. It became clear through discussions with other applicants, however, that the open draft actually helped them focus their projects on different research themes to avoid direct competition with Somus. When a text is documented in a wiki, it is easy to find an author and time stamp for the text, making it uncomplicated to solve authorship questions.
To what extent these and other examples drawn from the cases can be generalized and translated to other research fields and environments, will be discussed in the final paper.
The author would like to thank Tiina and Antti Herlin Foundation for their continuous financial support to her PhD research, which this study is one part of.
The author declares that she has no competing interests.
1 See for example the recent statement made by EU member states about making all scientific articles freely accessible by 2020: http://english.eu2016.nl/documents/press-releases/2016/05/27/all-european-scientific-articles-to-be-freely-accessible-by-2020
2 Many research integrity guidelines limit the categories of research misconduct to falsification, fabrication and plagiarism (FFP), but for example the Responsible conduct of research and procedures for handling allegations of misconduct in Finland (2013) guideline by the Finnish Advisory board on Research Integrity names misappropriation as a form of research fraud.
Borg, S and Kuula, A 2007 Julkisrahoitteisen tutkimusdatan avoin saatavuus ja elinkaari – valmisteluraportti OECD:n datasuosituksen toimeenpanomahdollisuuksista Suomessa. Yhteiskuntatieteellisen tietoarkiston julkaisuja. Tampere, Finland: Yhteiskuntatieteellinen tietoarkisto (Finnish Social Science Data Archive). pp. 37.
Kronkvist, J 2011 Somus as an attempt at a new paradigm. In: Näkki et al Social media for citizen participation – Report on the Somus project. VTT Publication 755. Espoo, Finland: VTT Technical Research Centre of Finland.
Laine, H, Lahti, L, Lehto, A, Miettinen, M and Ollila, S 2015 Beyond Open Access – The Changing Culture of Producing and Disseminating Scientific Knowledge. Proceedings of the 19th International Academic Mindtrek Conference. New York: ACM. pp. 202-205 DOI: http://dx.doi.org/10.1145/2818187.2818282
Van den Eynden, V and Bishop, L 2014 Incentives and motivations for sharing research data, a researcher’s perspective. Knowledge Exchange. Available at http://www.knowledge-exchange.info/event/sowing-the-seed [Last accessed 30 May 2016].
Comments and suggestions by the SciDataCon 2016 reviewers
This is a very spontaneously written quick post inspired, or rather forced out, by the release of some 70 000 OKCupid users’ data (which I learned about yesterday). The data was released by a Danish self-proclaimed researcher and scientist, actually just a graduate student (I say ‘just’ not because I consider graduate students to be lesser human beings, but because he obviously was not aware of, nor acting according to, the professional codes of the research community, i.e. wasn’t a fully trained scientist). The data has since been removed from the Open Science Framework repository.
What I’m about to say next is a general remark on the current culture among the scientific community, rather than an analysis of the individual case that is the OKCupid-gate.
In a way this feels like an accident waiting to happen. The discussions concerning research integrity and ethics have been lagging far behind the progress of Big Data, Science 2.0 and Open Science. Both Science 2.0 and Open Science have so far mostly been playgrounds of natural scientists. Yes, there are the emerging fields of digital humanities and computational social sciences, but despite the buzz they remain marginal. Most of the human scientists applying computational methods and digital sources to human science research questions are having to go pretty DIY on their workflows, both in terms of practical and theoretical methods. It is not my intention to pit natural and human scientists against each other by saying that one is more ethically responsible than the other. What I am saying is that human scientists have a different, and research-wise deeper, understanding of all things human and social. It’s their job, after all. They are better equipped to understand the 50 shades of open in social media and see the potential harm that personal information “that is already openly available” can do if it’s released as open data.
All these discussions, about computational methods, Open Science, Science 2.0, Web 2.0, research integrity, natural sciences, human sciences etc., are going on in their separate bubbles. It is a terribly slow and wasteful way to proceed. I say it’s about time to start bursting these bubbles. First of all, we need to stop referring only to natural sciences as “Science” (yes, Anglo-Saxon world, I’m looking at you) and make the concept also include human sciences. This would help us to acknowledge that there are certain skills and lessons that every researcher, or scientist, no matter whether they are studying Big Bang the historical event or Big Bang Theory the sitcom, needs to learn. Human scientists have to start acquiring basic computational and research data science skills, and natural scientists need to better understand how their work relates to societal issues.
We should also break the Open Science bubble and make openness (as in accessibility and transparency) a prerequisite for good science. This would maybe finally rid us of the weird idea some people seem to have (among them both advocates and opponents of Open Science) that openness equals vomiting content onto the web without giving a second thought to issues such as quality (metadata, licensing) or privacy (the OKCupid case).
The ongoing discussion in Finland around the suspected research misconduct at the VTT Technical Research Centre of Finland, which has also gained the attention of the Retraction Watch blog, reminded me to finally publish the transcript of my presentation at the 2015 Academic Mindtrek conference. The presentation deals with the links between openness and integrity in research. I give an estimate of the amount of research fraud in Finland and conclude with a little exercise called ‘How would I cheat?’ (to know the answer you have to scroll all the way down). The presentation was part of a workshop called ‘Beyond Open Access – The changing culture of producing and disseminating scientific knowledge’, which I organized in my role as the core person of the Open Knowledge Finland Open Science working group. The other presenters were Anne Lehto from the University of Tampere Library, Markus Miettinen from Freie Universität Berlin, Samuli Ollila from Aalto University and Leo Lahti from the University of Turku (currently working at KU Leuven). Ironically enough the extended abstract was published as toll access, but here is the PDF. You can also check the recorded presentations here (mine is awful; I’ve gotten a lot more confident since).
In this presentation I will talk about bad scientific behaviour enforced and made possible by the current paradigm of “closed science”, and the solutions that open science could offer.
My academic background is in social science history. I have one foot in research administration and science policy and the other in research. After (and before) finishing my master’s degree in Economic and Social History at the University of Helsinki I worked in organizations such as the Council of Finnish Academies, the Finnish Advisory Board on Research Integrity and CSC – IT Center for Science. The first introduced me, through learned societies, to the concept of the research community, the second to responsible conduct of research and the last to the idea of open science. Since receiving a grant from the Tiina and Antti Herlin Foundation earlier this year I have been able to combine these three things into a research subject. My subject is closely related to the theme of this presentation, but I will get back to it a little bit later. In addition to being a doctoral student I am also an active member of Open Knowledge Foundation Finland as the core person of the Open Science working group.
Now that you know the context, we can move forward by defining the key concepts (social scientists love to talk about concepts). What do I mean by open science, research integrity and responsible conduct of research? Let’s take the exercise even further: what do I mean by open and by science? I like clear concepts, but at the same time I like to be quite liberal with them. First of all, I understand science in the same sense as the Finnish word “tiede”, the German “Wissenschaft” and the French “science”: as something that encompasses all academic research, not just the natural sciences. The meaning of openness, at least in the context of open science, to me goes deeper than just public, or free of charge. I understand open science as something that doesn’t concern only researchers but society as a whole. I see all of these concepts having some kind of link to open science or to discussions about it. To me the long-term goal should actually be making the concept of open science obsolete. All good science should be open, meaning that research methods and chains of reasoning should be transparent, the data open for re-use, replication and scrutiny, and the research results available to anyone interested, in a format that is accessible and in language that is understandable (by which I mean no jargon, and yes, this applies to all fields from political science to cosmology). All research has societal relevance; it just needs to be recognised and communicated, for example in a summary.
What is research integrity
An individual is said to possess the virtue of integrity if the individual’s actions are based upon an internally consistent framework of principles. The substance of research integrity consists of the commonly accepted professional principles that all fields of research share. This set of principles is referred to as responsible conduct of research (RCR for short). Research integrity and RCR should not be confused with the many field-specific ethical codes of conduct that regulate, for example, the medical sciences.
So RCR is the lowest common denominator for good scientific research. The flip side of RCR is research misconduct and research fraud. Steering clear should be pretty basic: don’t claim someone else’s text as your own, don’t tamper with your data in order to get more dramatic results, don’t invent your own data. This is the unholy trinity of research misconduct: fabrication, falsification and plagiarism. The Finnish RCR guideline also adds misappropriation, so at least while you’re in Finland you should also refrain from stealing other people’s ideas (it’s not illegal though, as ideas don’t have copyright). RCR principles are most often understood as not taking a stand on questions of excellence. You can be the most virtuous researcher and still produce boondoggle research, just like doing bad quality research, for example using too small a sample size or jumping to conclusions (like assuming a causal link between high levels of ice cream consumption and cases of drowning), is not considered misconduct.
But irresponsible practices don’t stop at FFP (+M, the Finnish addition). There is a vast “grey area” between clear-cut misconduct and recommendable behaviour, and that is where things get tricky. Is it bad practice to name a more distinguished researcher as a co-author when in fact they had very little to do with producing the article? Is it wrong to report results in a conference poster that you haven’t quite verified yet, but are 99.9% sure you will get? Is it really so terrible if, in your list of publications, you move your name to first place in the list of authors, just for this one article? Surely there’s no harm in translating “docent” as “assistant professor” in your English CV? Adjunct professor, assistant professor, who’s counting? The grey area is full of practices that are widespread but problematic. Some of them even have strong arguments in their favour, like adding a professor’s name to a student’s paper: they both win, as one gets more attention and the other stays relevant while having had to swap research for red tape.
How big is the misconduct problem
I will read you a quote from the Royal Dutch Academy’s report on responsible research data management, that reflects very well my understanding of the situation:
“Very little if anything is known about the frequency of violations against scientific integrity. Only a very limited amount of research has been carried out on this phenomenon. Estimates vary from “never” to claims that for every case of research fraud brought to light, there are approximately 100,000 undiscovered violations, both major and minor. With estimates and claims veering wildly from one extreme to the other, we can only conclude that we simply do not know how big the problem of scientific misconduct actually is. The estimates and claims are no less extreme in the Netherlands. Much depends on how we define research fraud. Do we mean only the most serious cases of FFP, or are we also referring to minor instances of improper behaviour in everyday research practice? Those who say that the incidence of fraud is negligible are thinking of rare cases of falsification; those who claim that fraud is widespread are thinking of everyday behaviour. As long as there is no proper, evidence-based research on fraud, all claims are mere speculation.”
To get at least a rough idea, let’s look at some figures we have available from Finland. In Finland there is in place a unified process for handling allegations of research misconduct. By undersigning the RCR guideline document a research institute promises to deal with all suspected cases of bad behaviour inside its walls according to this process. One of the demands made in the guideline is that the Finnish Advisory Board on Research Integrity needs to be informed whenever a research misconduct investigation is taking place. Since the list of undersigned organizations covers practically the entire Finnish research community, the board’s archives should in theory hold information on all suspected and confirmed misconduct at least since 1998.
A survey conducted by the board in 2003 indicated that the undersigned organizations were in fact quite trustworthy in reporting misconduct to the board. The survey concerned the years 1998-2002, during which time the board had received information on nine confirmed cases of research fraud. According to the survey the correct number of occurrences was eleven.
If the figure gotten from the survey holds true, during the years in question there were on average fewer than three research fraud cases per year in Finland. No surveys have been conducted since 2003, but according to the board’s annual reports the order of magnitude has remained stable. For example, based on the 2012 annual report the number of frauds for 2012 was five, for 2011 three and for 2010 two (even though the numbers were on the rise during the years mentioned, the small overall number doesn’t allow for conclusions without further analysis of the individual cases).
The figures seem very low even considering the relatively small size of the Finnish research community. According to a meta-analysis of 18 surveys (by Daniele Fanelli) asking researchers about their working practices, up to 2% admitted having fabricated, falsified or modified data or results at least once. The article where the result was published states that “[c]onsidering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct.” Let’s assume that there are 7000 academic researchers in Finland. That’s the number of members of The Finnish Union of University Researchers and Teachers. 2% of 7000 is 140. Continuing this exercise based on the figures mentioned earlier, let’s estimate that the average number of confirmed misconduct cases brought to the board’s attention would be five per year (there are no long-term statistics). This brings us to an estimate of 115 cases during 1992-2013. These two roughly estimated figures (140 corrupt researchers and 115 guilty verdicts) are at least in the same ballpark. But when one considers that the actual number of people doing research in institutions bound by the RCR guideline is probably much higher than 7000, that the 2% from the survey is a conservative estimate and that the number of fraudsters caught between 1992 and today in Finland is in reality well below 115, it is impossible not to conclude that a lot of misconduct goes unnoticed.
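For the curious, the back-of-envelope arithmetic above can be written out in a few lines of Python. The figures are the rough assumptions stated in the text (7000 researchers, the 2% survey share, five confirmed cases per year over roughly 23 years), not exact statistics:

```python
# Two rough, independent estimates of research misconduct in Finland.

researchers = 7000    # members of The Finnish Union of University Researchers and Teachers
survey_share = 0.02   # share admitting fabrication/falsification (Fanelli's meta-analysis)
cases_per_year = 5    # assumed average of confirmed cases reaching the board per year
years = 23            # roughly the 1992-2013 span used above

# Estimate 1: researchers who would admit to misconduct in a survey
survey_estimate = round(researchers * survey_share)   # 140

# Estimate 2: confirmed cases accumulated over the whole period
verdict_estimate = cases_per_year * years             # 115

print(survey_estimate, verdict_estimate)
```

Both numbers land in the same ballpark, which is about all a comparison this rough can show.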
This in itself is not shocking, since no regulatory system has ever been airtight. But for the credibility of Finnish research it would be important to understand more about that inevitable gap between what comes to light and what doesn’t. Some critics have hinted that self-regulation is just a way for the scientific community to sweep problems under the rug.
One solution to the challenge of misconduct is to tackle the phenomenon at its root, instead of chasing wrongdoers. The question is: why do researchers cheat?
Distorted incentives created by the journal article
What does the journal article have to do with bad behaviour and grey areas? The metric systems built around articles create distorted incentives for science, making misconduct more appealing. Everyone working in or around academic research knows the expression “publish or perish”. For the most ambitious it is not enough to just publish; they want to publish in the most prestigious journals, with the highest impact factors. In order to get your article into the likes of Science and Nature, you need to have sensational results. Like finding out that disorderly environments promote stereotypes and discrimination, or that openly gay canvassers could shift voters’ views towards supporting same-sex marriage, or that mouse cells could be “reprogrammed” by soaking them in mildly acidic liquid. As many of you probably recognised, these examples are drawn from some of the most scandalous cases of research fraud of the past few years. The papers in question were published in Science and Nature.
I’m not particularly trying to point a finger at these two journals, but it doesn’t seem to be entirely coincidental that these famous fraudsters have emerged from their pages. In 2013 Nobel laureate Randy Schekman called for a boycott of big prestigious journals, naming especially Science, Nature and Cell, which he dubbed “luxury journals”. He accused them of focusing on topics that are sexy and likely to make waves, at the expense of research integrity and scientific quality. He also lashed out against the impact factor, calling it a “toxic influence” and saying that “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.” Schekman’s antidote was open access, especially the journal eLife, of which he himself happened to be the editor.
I am all for open access, as long as it’s not of the hybrid type, but I don’t agree with Schekman that open access would solve the problem of distorted incentives. Open access journals are in many ways bound by the same mechanisms as traditional journals. If Science, Nature and Cell ceased to exist today, some other journals would most likely take their places. If all journals were to turn their business models into open access overnight, I don’t think that would eradicate research misconduct. After all, the publish-or-perish mentality is linked to article-based metrics, not the business models of journals.
Why did Diederik Stapel cheat? He is the man behind the research that showed a correlation between a messy environment and a tendency towards discrimination. In an interview with The New York Times he described the motives behind fabricating research results as aesthetic: “It was a quest for aesthetics, for beauty — instead of the truth,” he said. According to the story he described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high. In other words, he committed fraud because he could, because it paid off and he didn’t get caught. As for the motives of the other two researchers, Michael LaCour of gay canvasser fame and Haruko Obokata, who was behind the STAP cell controversy, I can only speculate, since they haven’t done any tell-all interviews yet, but both of them had very promising, almost shooting-star-like careers before getting caught for misconduct. When it comes to their so-called crimes, it looks like LaCour followed in Stapel’s footsteps by making up his own data, while Obokata tampered with her specimens, thus creating the desired results.
As a fraudster Stapel is in a league of his own. With 58 retracted articles he has earned the fourth spot on the Retraction Watch blog’s leaderboard. He is also, at least in part, to blame for the so-called replication crisis in social psychology, to which Michael LaCour only added fuel. An organisation called the Center for Open Science has quite smartly turned the crisis into a research project, called the Reproducibility Project: Psychology. They tried to replicate the experiments of 100 published studies. Unfortunately the article containing the results appeared in Science and is thus behind a paywall.
The Center has also taken part in bringing forth a set of recommendations called the Transparency and Openness Promotion Guidelines (TOP for short). As you might have already guessed, they aren’t really about open science or open data. Where data sharing is concerned, TOP is about providing research data for replication attempts after the fact, after the research has been published, and only for replication purposes. To me this feels like a very limited solution.
I much prefer the response that Stapel’s countrymen had at the Royal Dutch Academy (KNAW). They drew the conclusion that there was probably something lacking in data management practices if fabrication on such a scale was possible without anyone noticing. The Academy conducted a survey among Dutch researchers and found that data management indeed often left room for improvement. It was the usual story: data stored on personal computers, and so on. The report concluded that “Maximum access to data supports pre-eminently scientific methods in which researchers check one another’s findings and build critically on one another’s work. In recent years, advances in information and communication technology (ICT) have been a major contributing factor in the free movement of data and results.” The report comes very close to recommending open data policies, but doesn’t quite get there. The year was 2013, and open science has taken big leaps since then. Perhaps if the report were written today the recommendations would be more radical.
The report also examined the codes and guidelines in place, in case they were to blame for sloppy data management and needed tightening. The conclusion shows common sense in recommending that instead of setting up new regulations, researchers should be made more aware of the existing ones.
Openness in RCR guidelines
And boy, there sure are codes of conduct to be aware of. In 2013 the journal The Lancet reported that there were 49 sets of ethical guidelines for research in place in 19 European countries. To be fair, not all of these are about RCR; some are field-specific ethical codes. Still, one researcher gets to deal with quite a few guidelines, at least in theory. Let’s do a mini survey. How many of the Finnish researchers in the room have read the document “Responsible conduct of research and procedures for handling allegations of misconduct in Finland”? How many of you have at least heard about the European Code of Conduct for Research Integrity? How about the Singapore Statement? It is clear that problems with research integrity will not go away by writing these types of texts. It is equally obvious that there is a communication deficit here, but that doesn’t mean it isn’t important to define and put in writing common values for science, even if only symbolically. The principles written in the guidelines are the ones that good quality research has lived by for decades. The guidelines merely reflect the values of the research community; they do not install them. Like openness. Open science is sometimes presented as something new, but when you read a few of the guidelines, you see that it is already there, at the core of good science. Let’s take a quick look at the three earlier-mentioned codes.
None of the three documents directly refers to the concept of open science, which doesn’t mean that they are anti-open, just that the term is a relatively recent invention.
The Singapore Statement is the most conservative of the three. It demands data sharing, sort of: “5. Research Findings: Researchers should share data and findings openly and promptly, as soon as they have had an opportunity to establish priority and ownership claims.” So in principle data should be shared, but the mention of establishing priority and ownership claims gives a way out to those not so keen on sharing.
The European Code of Conduct for Research Integrity uses stronger terms when speaking about openness and data sharing, which makes sense, since it is meant for a narrower audience than the Singapore Statement and therefore the text doesn’t need to please everyone. Also, the European Commission was already quite positive about open access at that time (five years ago), for example implementing an open access pilot and funding OpenAIRE in FP7, thus encouraging positive stands towards openness in Europe. The European Code mentions openness and accessibility as one of the principles of integrity in scientific and scholarly research. The text goes on to state that “Objectivity requires facts capable of proof, and transparency in the handling of data. Researchers should be independent and impartial and communication with other researchers and with the public should be open and honest.”
The Code encourages data sharing:
- Data: All primary and secondary data should be stored in secure and accessible form, documented and archived for a substantial period. It should be placed at the disposal of colleagues. The freedom of researchers to work with and talk to others should be guaranteed.
The above-mentioned point is made in a portion of the text that lists things to be taken into consideration when drafting national guidelines, since, according to the document, some issues may be subject to cultural differences and cannot therefore be incorporated into a universal code of conduct. In other words, it’s an additional suggestion, not part of the code’s core.
Where do the Finns stand on all things open and data? Here: “2. The methods applied for data acquisition as well as for research and evaluation, conform to scientific criteria and are ethically sustainable. When publishing the research results, the results are communicated in an open and responsible fashion that is intrinsic to the dissemination of scientific knowledge.”, and here: ”4. The researcher complies with the standards set for scientific knowledge in planning and conducting the research, in reporting the research results and in recording the data obtained during the research.” In addition there is the following mention under the headline “Disregard for the responsible conduct of research”: “inadequate record-keeping and storage of results and research data”.
How would I cheat
At the beginning of this presentation I promised to get back to my own research. In the workshop description I also stated that the workshop would be about practical examples. So I decided to combine the two and conclude with a little game called “how would I cheat?”.
The centre around which my doctoral research revolves is the Finnish definition of responsible conduct of research. My research questions focus on delving into its past, present and future. I approach the Finnish RCR guideline from three different perspectives: 1) the defining and negotiating of its content, 2) the practical application of the values and the handling process described in the guideline and 3) how it stands against changing trends in research practices.
In plain language, falsification means doctoring data and/or results. One of my aims is to produce statistics concerning allegations of misconduct and cases of identified misconduct in Finland during 1998-2012. As I mentioned earlier, the Finnish Advisory Board on Research Integrity’s archive should hold information on all such cases in Finland. That is most probably not the case, since the guidelines have been enforced in different institutions to varying degrees. I could tweak the data to lean this way or that, f.e. to show that certain disciplines have produced more investigations than others (which is likely; my hypothesis is that research fields have different cultures when it comes to handling misconduct, meaning that there could be departments that are more likely to report things to higher levels). I could do this in order to create more dramatic results and gain more attention for my work, or to prove an idea that I in my gut KNOW to be true, but that the damn data will not support.
My chances of getting away with this aren’t too bad, because the records are mainly on paper, residing in an uninviting bunker archive. But the number of misconduct investigations in Finland is so low (we are most likely talking about tens, not hundreds of cases) that dramatic results would raise questions, or at least enough interest for other people to go digging in the archive themselves to find more detailed information. Which they of course then wouldn’t find.
Fabrication means inventing things out of thin air. I’ve been struggling to find an example of an open research project from the humanities for my case study about the way in which RCR is put into practice in open and collaborative research projects. Maybe I should fabricate one! It would be a lot of work, but doable.
First I would need to come up with a research question, invent participants and their backgrounds and then fabricate a blog detailing this made-up research. I could actually commit two frauds with one stone and plagiarize the content of the blog, copying and pasting from research blogs, articles, etc. For the discussion part I could copy actual discussions found online. When a text is online and machine-readable, the fraud is easy to detect if anyone looks into it, but I would rely on no one ever suspecting that something like comments on a blog could be stolen. I would have to be more careful with the actual blog posts. Older printed material (f.e. from the 90s) would be ideal, which means I would need to transcribe a lot, but I think it would be worthwhile, since it would significantly lower the chances of someone detecting the fraud. A big part of the blog’s content would be nonsense, because making it coherent would (at least almost) make it real research, and that would spoil the cheating, wouldn’t it.
The second phase would be inventing the interviews, i.e. the actual data of my research. I could invent all kinds of drama, but since my whole plan would be to not attract too much attention to the fabricated research behind the fabricated interviews, I would want to make it as boring as possible and paint the research as an uneventful boondoggle. The main participants, the ones I would “interview”, would be made-up people from made-up universities. I could create false LinkedIn profiles (ResearchGate doesn’t accept an invented university, or does it?) with e-mail addresses directing incoming mail to me, just in case someone should start digging.
Likelihood of getting caught: very high. I think this plan has “Titanic” written all over it. When I think about the amount of work this would require… oh dear. Actually the laboriousness might heighten the chance of success a little: people would think that no-one in their right mind would go through this much trouble to achieve so little.
So now, after having prevented at least one case of research misconduct through openness, my own, I leave you with the following take-home messages:
Open science has the potential to reduce research misconduct through added transparency.
Open science is in line with the existing RCR principles.
Open science is responsible science.
I am currently working on a case study on responsible conduct of research in the context of open research collaboration. I have chosen three research projects that have been conducted online completely openly, with an open invitation for anyone to participate. I won’t name the projects yet for the simple reason that I intend to interview the key researchers of those three projects, but haven’t approached all of them yet, and I feel it would be a little bit tacky if they were to hear about my project indirectly. So I’ll get back to the projects as soon as I have made my plans known to the persons involved.
What I want to discuss here is the practical aspect of formulating informed consent for research subjects. As I plan to contact those who are hopefully my interviewees-to-be, I have been thinking a lot about the information that I owe them about my project.
Informed consent is a concept central to the ethics of human research, i.e. research on human subjects. Here’s what my wise friend Wikipedia says about informed consent:
“An informed consent can be said to have been given based upon a clear appreciation and understanding of the facts, implications, and consequences of an action. To give informed consent, the individual concerned must have adequate reasoning faculties and be in possession of all relevant facts.”
The concept was first introduced in the domain of the medical sciences, but it is equally important to research in the humanities and the social and behavioural sciences, which can often involve minors or deal with sensitive issues such as domestic abuse, sexual orientation or political views, just to name a few examples.
My research subject, which deals with people in their professional roles, is not sensitive, but because of my commitment to the principles of openness it requires thorough ethical reflection. There is practically no precedent concerning open, unanonymized, qualitative interview data, at least that I know of. I will get back to the challenges I have experienced when trying to find a repository for archiving and sharing my data in another blog post. But before I can archive, let alone share, any data, I need to be sure that my research subjects understand what they are getting involved in and agree to everything that I’m doing.
In order to inform my interviewees I have drafted a project description, which can be found as a Google document here. The document has been approved by the University of Helsinki Ethical Review Board in the Humanities and Social and Behavioural Sciences (my second supervisor is the chair of the board, but she recused herself from the decision-making in my case). I have followed the Finnish Advisory Board on Research Integrity (FABRI) ethical principles in the humanities and social and behavioural sciences in putting together the document.
Assenting to be interviewed can be considered as consenting to the interview data being used for the purpose of the research project in question without any additional paperwork. Concerning the archiving and sharing of the data, I have decided to ask for written consent. The consent form I have formulated follows the example given by the Language Bank of Finland (I can’t find the model form anymore since they updated their website, sorry about that), with some minor alterations and additions. At the moment Zenodo looks like the most likely repository for my data, but as I mentioned above, I will get back to this issue, since it has caused me a lot of headaches. Stay tuned.
Ever since I heard Erin McKiernan talking about her Open Pledge at OpenCon 2015 I have been thinking about making my own. It seems only appropriate to publish it at the beginning of a new year. In addition to McKiernan I have been inspired, for example, by the Open Science Peer Review Oath and the responsible conduct of research guidelines I’m studying.
My research is open by default.*
I will not be open at the cost of my research subjects’ privacy; when in doubt I will refrain from publishing.
When I don’t share something I will explain the decision openly and honestly.
I will share both my successes and my failures.
I will blog my research.
I will communicate my research in a way that is understandable also to people outside the research community.
I will either publish in full open access journals or traditional journals that allow self-archiving.
I will not publish in so called hybrid open access journals.
I will not publish in journals owned by companies that exploit the research community.
I will be an advocate of open science and speak about my choices.
*See here what I mean by ‘open by default’.
This text is based on a short presentation I gave in a webinar organized by Open Access Academy, EURODOC and FOSTER today, titled Open Data – How, Why and Where? I was asked to speak about why early career researchers should care about open data.
Sometimes I wonder whether we are doing ourselves a disservice by talking about Open Access, Open Data, Open Science, Open Source, Open Notebook Science etc. All of these labels make openness seem like something new, something complicated and something that adds to the burden of researchers, when it is exactly the opposite. To me Open Science is just plain good science, both in terms of academic excellence and research integrity.
Open Access and Open Data are already a reality. More and more funders expect the scientific publications produced with their support to be openly available for maximum impact. Policies are not yet as demanding for research data, but there is an increasing number of incentives, and growing pressure, to make it more openly available too.
In light of this, you don’t need me to tell you that you should care about open data. What I can tell you instead is why you should embrace and practice open data, even go beyond it.
What I’m trying to do with my dissertation is to go beyond open access and open data and make open the default setting of my work. It makes life so much easier compared to the other option.
When I received the research grant I'm currently working on, I wanted to put into practice the things I had been trying to advance in my previous job as coordinator of the Open Science and Research Initiative of the Finnish Ministry of Education.
I tried to dissect my research to make it fit into boxes labeled Open Access, Open Data, Open Methods and so on. It felt forced. None of the policies seemed to apply to my particular case. I didn't even understand what the word data meant for me: I use non-digital archival sources (that means paper), published books (more paper), statements and guidelines, and I also collect interview and survey data.
When I asked myself “What can I publish as Open Data?” I hit a wall: nothing.
Nothing that I produce, or can produce, meets the requirements of, for example, the Open Knowledge Foundation's definition of open data. But instead of giving up and simply concluding that qualitative social scientific research will forever be a bystander in the Open Science discussions, I wanted to find a different approach.
So I rephrased the question: "What can I NOT publish openly?" This way I don't have to justify openness or worry about doing it right according to this or that policy. This new attitude has removed the fear of not fitting in and given me a clear direction. I can concentrate on doing the most interesting research I can, as transparently as I can, on my own terms.
There is of course a lot of work to be done and things to figure out, like ethical issues, tools, platforms, formats, metadata, you name it. Fortunately there is a great community out there, ready to give support.
I want to stress that being open by default doesn’t mean being careless. I can do trial and error with my own life but not with the lives of my research subjects. That’s why I will keep on asking the question until my research is finished and keep on getting different answers in different phases of the process. I have put a lot of effort into planning my work and will continue to do so. Whenever I am in doubt about publishing something I’ll take a time-out and proceed only when I can be sure of not harming anyone.
To summarize why you should make openness your default setting:
• don't just passively wait to see what demands funders and other makers of research policy will impose on us next; be proactive and create your own practices
• be a member of the open science community
• do good science that has real impact
• advance your career: more and more recruiters and funders take published data into account, and you can also gain traditional merit via the citations that your open access articles and published data sets generate
• stop being afraid of failure: you'd be surprised how many people are interested in your non-significant results, and reducing publication bias is also an ethical issue
• you might not be in academia forever, so build broad expertise and a personal brand