[If you arrived here using the short link bit.ly/fear_of_scooping, you might be interested in reading my first article, “Afraid of Scooping – Case Study on Researcher Strategies against Fear of Scooping in the Context of Open Science”. It was first presented at SciDataCon 2016 as a conference paper and then submitted to the Data Science Journal as a research paper. It was accepted there with minor revisions and is currently in the final stages of editing. I have published a preprint manuscript in Zenodo.]

The abstract for this poster can be seen in my Zenodo repository. It was first submitted to the World Conference on Research Integrity 2017 as a paper proposal (this too will someday be noted in the updated version of my CV of failures).

In the poster I claim that the social conscience of academic research, at least in Finland and very likely in many other countries, is fragmented and outsourced. Fragmented, because discussion, development and implementation of issues concerning research integrity, research ethics and open science have been siloed into separate containers, based on arbitrary organizational boundaries and in a way that doesn’t encourage inter-silo dialogue. Outsourced, because the bodies regulating research integrity and research ethics and promoting open science are governmental, and the people running them form an institutional elite of near-mandarin status.

For many historical reasons the Finnish learned societies, the four academies that would be the natural outlets for multidisciplinary opinion and evidence-based advice, have long been very passive in all kinds of matters, especially national research policy. The research behind the poster is still ongoing, but so far it seems that the academies had virtually no part to play in the formation of the Finnish research ethics landscape.

When the Finnish Advisory Board on Research Integrity (TENK, which for its first twenty years was called the Board on Research Ethics) was founded in the early 1990s, the initiative didn’t come from the research community but from the national parliament and government. The Finnish equivalent of a national research council, the Academy of Finland, was the first to take on the task of advising and guiding the community in matters of research ethics, soon to be replaced by the Ministry of Education and its specially appointed board.

In the beginning, the motivation for ethical guidance and oversight was tackling research misconduct and keeping a watchful eye on the development of genetic research. Quite soon the former motivation overshadowed the latter in the work of TENK, as the tasks related to medical ethics, gene technology and animal testing were taken over by the ministries that dealt with legislation in those domains. These emerging bodies didn’t concentrate solely on the challenges of scientific research but dealt also with industry concerns, for example. Each body got a small secretariat, sometimes less than a whole person-year per year, and an expert board of its own.

Fortunately, there are currently international efforts to bridge these gaps of yesteryear, such as the ENERI project, which aims to bring European research ethics committees and research integrity offices closer together. The secretariats of the Finnish boards have also done their best with their scarce resources and held semi-regular joint meetings.

Unfortunately, mistakes have a tendency to be repeated. Open science, the pursuit of making academic research more accessible, transparent, efficient and influential, is in Finland once again being driven forward by none other than the government, through the Ministry of Education and Culture’s Open Science & Research initiative. The universities and other research institutions are getting more and more on board at the top governing level, but once more, the community lags behind. I have personally had the questionable honor of speaking about the benefits of open research at an official University of Helsinki doctoral school in human sciences event to a crowd of fewer than ten people, in a hall that could have fit a hundred (another one for the CV of failures).

The ALLEA European Code of Conduct for Research Integrity is another disappointing example. First of all, it was apparently updated only because the EU Commission asked for it. When discussing dissemination, the code focuses solely on formal print-era publications, leaving out dissemination throughout the research life cycle, new digital tools and media, popular science and citizen engagement altogether. When I asked someone who had been involved in the writing process about this oversight, they said that the code was written in a way that reflects the status quo of the research community’s understanding of what is and isn’t responsible practice.

Sometimes the lack of initiative and enthusiasm from the research community in the face of the societal responsibility of research makes me want to scream. I understand that people need to put bread on the table, and that requires articles, which require lots and lots of very hard work. But is the load of publish-or-perish really so burdensome that it doesn’t leave room for anything else (in a professional capacity)? Or could the outsourcing of social responsibility have created a vicious circle: out of plain sight, out of mind? That’s something to reflect upon in my next print-era high-impact journal article.



Quick thoughts on the new European Code of Conduct for Research Integrity

ALLEA, the joint organisation for all European academies (also beyond the EU), has published a new version of the European Code of Conduct for Research Integrity. The previous one came out in 2011 and was co-written by ALLEA and the European Science Foundation (ESF), which has since gone through a major organisational transformation and a change of focus.

I am about to attend a seminar in Finland about the new code (starting in an hour!) and wanted to quickly leaf through it so that I get the most out of the discussion. So what I’m about to say are super quick impressions of the document. Should I come to reconsider something after a more thorough reading, I will make sure to post it here.

First of all, I am really pleased that data is acknowledged as a research output and that citing data is mentioned as a responsible practice:

• Researchers, research institutions and organisations acknowledge data as legitimate and citable products of research.

So yay for that. Unfortunately, that was pretty much the only positive surprise. Beyond the parts on data management, the document is very much same old, same old. Which is mostly fine in the sense that I don’t disagree with anything (or maybe one thing, but I’ll get back to that). What is said is not so much the problem as what is left unsaid or unaddressed.

Citizen science, for one. I see that the European Citizen Science Association is one of the recognised stakeholder parties, which is a good thing. But the traditional dichotomy between research subject and object is in no way challenged. It is obvious that the researcher assumed in the code is someone with academic training and an affiliation with an institution, such as a university or a research institute. There is no mention of co-design or anything like that. Either you are a researcher (or what is called a partner, though I can hardly see that applying to individual citizens or, say, informal networks of citizens) or you are a subject (more like an object in this context). The relationship between research and society, and the responsibility that the research community holds towards the society that funds it, is also hardly touched upon.

There is of course this:

• Researchers publish results and interpretations of research in an open, honest, transparent and accurate manner, and respect confidentiality of data or findings when legitimately required to do so.

But without defining what is meant by open, this is very weak. There is nothing about translating results for the general public, engaging relevant stakeholders in discussion or trying to make sure that the research has impact beyond the impact factor. I think the choice of publisher should be seen as an integrity issue: do you disregard the unsustainable and even downright dishonest practices of some publishers in your quest for publications, or do you go to one of the responsible and open ones? This applies both to predatory publishers and, say, to corporations bleeding research communities dry and hogging ownership of publicly funded research outputs while making bigger profits than the oil industry.

Which brings me to the chapter that frustrates me the most, 2.7 Publication and Dissemination. The perversity of the current academic publishing model, coupled with the process of gaining merit, is the main threat to research integrity: the journal article is in many, if not most, fields the only way to have research results acknowledged, creating incentives for authorship fraud, data secrecy and p-hacking, among other things. These are all practices that the code aims to prevent! And still the journal article is presented as the default way of disseminating research – without naming it, mind you – with no mention of, let alone encouragement to explore, the plethora of other means provided by the digital environment. I guess it could be argued that since the article is not named as the main type of publication, the code can be applied to all kinds of research outputs in the public space. But just trying to fit this supposed one-size-fits-all T-shirt on data authorship shows that it is simply too tight.

To summarize: on the one hand, the code is too general, as when it doesn’t acknowledge the paradigm shift of citizen science (which, by the way, I see as the real paradigm shift, not open science, as you sometimes hear stated); on the other, it’s too specific, as when it addresses article authorship issues but leaves out data authorship, open collaboration articles, websites, games, you name it.

I would have liked to see bolder and more forward-looking stances. Now the document is like a guest arriving late at the party, trying to jump into a conversation that is already moving on to other topics. The code is, rightfully, described as “a living document that is updated regularly and that allows for local or national differences in its implementation.” The next update cannot come too soon. Hopefully that one will not be outdated from the moment of publication.

AFRAID OF SCOOPING? Case Study on Perceived vs. Actualized Risks of Sharing Research Outputs

This post is based on my conference paper presentation at SciDataCon 2016. The paper was part of a session called “Getting the incentives right: removing social, institutional and economic barriers to data sharing”. My slides can be seen here: bit.ly/fearofscooping. The abstract for the paper has been published on this blog here. I will be submitting a full paper for the Data Science Journal’s special SciDataCon 2016 collection. A preprint will be available on Zenodo. [Edit: link to preprint added.]

Getting scooped, having your research idea or results published by someone else, is a common fear among researchers. It can be a major stress factor and an energy drain.

The risk of scooping is often used as a counter argument for open science, especially open research data.

One recent testament to this line of argument is a New England Journal of Medicine editorial in which a group of over 200 medical researchers presented their conditions for data sharing, including embargoes of up to five years, fees for data reuse and processes for quality control.

Not all scooping is illegitimate. Most often scooping is accidental: science has trends and big questions that researchers flock to address. Scooping becomes misconduct only when the idea or content used was taken from another researcher without giving them credit. In the language of research integrity and research ethics, illegitimate scooping is called misappropriation.

In the context of open science, the distinction between legitimate and illegitimate scooping boils down to credit: it’s okay to take inspiration from others and to use someone else’s data published under an open license, as long as you cite the source.

I was interviewing researchers doing radically open science for my PhD, which deals with research integrity, and the fear of scooping came up. I started to wonder what made it possible for these individuals to do what they did despite the fear, seen as an obstacle to openness by so many of their peers.

In this case study I have looked into the openness strategies and practices of two open collaboration research projects created by Finnish researchers.

The Social Media for Citizen Participation, or SOMUS, project was created by a loose online collective called the Open Research Swarm. The Swarm operated on the Finnish microblogging service Jaiku, which has since ceased to exist. SOMUS combined methods from the engineering and social sciences, co-creating and testing applications and social media platforms with citizen stakeholders.

The NMR Lipids Project is an ongoing open scientific collaboration that aims to understand the atomistic-resolution structures of lipid bilayers. The discussion happens on a Blogspot-based blog, while manuscripts and data are developed on GitHub. The arXiv and Zenodo repositories are used for preprints.

The primary sources of this study are interviews with two key researchers from each project. My methodological toolbox includes influences from social science history and the social psychology of science. For understanding and describing the projects I have used cultural-historical activity theory and its activity system model.

The openness strategies of the two projects could be described by the term “open by default”. No conscious measures were taken to prevent scooping. The SOMUS project proposal was drafted completely in the open. Instead of scooping it, competitors aiming for the same funding call ended up actively avoiding overlaps with it in their proposals; some even contributed to the SOMUS proposal. The tragedy of SOMUS was that sharing funding among the Open Research Swarm proved difficult and created tension, catalysing the collective’s disintegration. The carefree enthusiasm also resulted in a lack of concern for data management: today only a PDF report remains openly available online.

The NMRLP turned the tables: instead of worrying about getting scooped, it has made sure that its participants don’t themselves inadvertently scoop anyone. Participants are expected to credit ideas received even in informal discussions. Anyone who has commented on the blog is eligible for co-authorship, and it remains for each participant to decide for themselves whether their contribution is enough. The interviewed researchers said that none of the project’s published articles has free-rider co-authors, something that is actually rare in their field.

I found that instead of being a disincentive, the fear of scooping actually acted as an incentive for one of the projects. For the researcher in question it was a way of getting rid of a constant source of stress: when your work is published online from the get-go, it is easier to prove priority.

All of the interviewed researchers named the unfairness of the academic publishing model as a motivation for creating alternative ways of disseminating research. One of the projects also wanted to address the normalization of p-hacking-style bad practices in their field.

Even though the interviewees showed a level of mistrust for the research community at large, they had a high level of trust in their immediate community.

They worried about advancing their careers, and a precarious career stage (a lack of funding or a permanent position) was a key motivation for most, but they were not prepared to compromise their principles in pursuing success. All named scientific curiosity as a source of fulfilment and motivation. They were aware of their pioneer status and excited by it.

Making generalizations based on a case study is always a risky business, but with maybe half a grain of salt there are general lessons to be learned here.

The interviews show a link between understanding and recognising research integrity and research ethics issues and the willingness to share. Research integrity training for researchers is already a policy priority, at least in Europe, and this conclusion only adds to its importance.

Exploring beyond one’s immediate research field can foster new ideas, research methods and questions. Again, multidisciplinarity is a research policy staple, but more should be done to make it mainstream.

Data citation principles and practices need to be in place as soon as possible, so that researchers’ work doesn’t get scooped just because no one knows how to give proper credit for data.

The open collaboration projects show a worrying lack of engagement from women. The level of female participation was in both cases below what can be considered typical for the fields: about 25% for SOMUS and 1/30 for NMRLP. The gender-specific concerns standing in the way of sharing should be recognised and addressed.

The experiences of empowerment, excitement and curiosity that open sharing and collaboration can offer should be communicated more, with less focus on buzzwords, policies, requirements and demands.

This could be done through case examples and champions. But in order to have them, efforts in sharing should be rewarded. One of my four interviewees grew tired of the precariousness of a researcher’s life and is now working as a college teacher, while another is contemplating leaving academia altogether because the numerous pats on the back are yet to translate into project funding or a position.


SciDataCon 2016: Accepted Conference Paper Abstract and Reviewers Comments

As I knew I was going to attend SciDataCon 2016 in my role as the secretary of the Finnish Committee for Research Data, I decided to submit an abstract for a conference paper for one of the sessions. To kill two birds with four layovers, if you will (flying from Helsinki to Denver and back on a budget is truly an endurance sport). The session that I felt was most suitable in terms of my ongoing research was one titled Getting the incentives right: removing social, institutional and economic barriers to data sharing. I was happy to find out earlier this week that my abstract has been accepted. The reviewers provided some feedback, which I’m thankful for, and requested a few revisions. Below is the abstract (in its original, unrevised form) with the reviewers’ comments at the bottom. Now all that is left for me to do (in addition to revising the abstract and making it camera-ready) is the small task of writing the actual paper. And enduring the gruesome flight itinerary.

Afraid of Scooping? – Case Study on Perceived vs. Actualized Risks of Sharing Research Outputs


Fear of being scooped is among the concerns most commonly voiced by researchers in discussions concerning Open Science in general and Open Data in particular. Does practicing openness make researchers more likely targets of research misconduct? This abstract describes a study based on two cases of “ultra open” science collaboration, with the aim of comparing the perceived risks of sharing a wide array of research outputs in real time with the risks that have actually realized. The focus is on research integrity related concerns. Preliminary findings are promising: for example, an experiment in openly drafting a funding proposal resulted in other teams focusing their projects on different research themes to avoid direct competition.


The current volume of Open Science themed policy initiatives, discussions, events and community action is a clear indicator of the recognition of a need for wider access to scientific knowledge. Open Access to scientific publications is the strand of the movement with the widest acceptance and most success.[1] Progress on other fronts of Open Science, e.g. Open Data and Open Methods (Open Source), has been significantly slower. There are several reasons for this, some due to institutional factors, funding mechanisms or the lack of established workflows. One major factor that stems from the grassroots of research, from individual attitudes and the professional cultures adopted by researchers, is fear.

A study commissioned by the Knowledge Exchange network reviewed incentives and disincentives for data sharing. Fear experienced by researchers was at the top of the list of barriers: “fear of competition, of being scooped and therefore reduced publication opportunities.” According to the study, these fears plagued especially early career researchers. They were both afraid of being ridiculed for their immaturity and wary of losing badly needed publications to scooping. When moving up the academic ladder, the possibility of getting laughed at faded in the researchers’ minds, while the threat of scooping persisted. (Van den Eynden & Bishop 2014)

In the academic world, getting scooped means that a researcher (or team of researchers) beats another to the punch in publishing a research finding. This happens often and becomes a research integrity offence only if one of the researchers or teams got the research idea from the other.[2] Fear of scooping in the context of Open Science can thus be interpreted as fear of becoming a victim of research misconduct.

How well founded is the fear of being unethically scooped due to sharing? What about being ridiculed? Belittling someone’s professional efforts is of course not research fraud, but it doesn’t exactly make one a shining beacon of integrity either. There is very little evidence on the occurrence of research misconduct, not to mention its effects on the victims. Because sharing research outputs beyond publications is still rare, our understanding of the social implications of practicing Open Science is also scarce. As a counterbalance to the fear of scooping, there are hopes of the transparency of Open Science acting as a cure for at least certain forms of irresponsible research behavior. To encourage more researchers to share their research outputs widely, we need to understand the situation better.

For this case study I have interviewed key researchers from two “ultra open” research collaboration projects. In addition to Open Data, the projects have produced all of their content openly online, inviting and welcoming outside participation. One of the projects even allocated funding to a “research swarm”, an open-membership online community operating on a microblogging platform. These two individual cases offer us a glimpse into the challenges and possibilities of research integrity in an Open Science era, as well as the social implications of sharing. The case study is part of an ongoing PhD research project on research integrity regulation in Finland.

Open Collaboration Cases

The two open collaboration cases that form the basis of this study are the NMR Lipids Project (NMRLP) and the Social Media for Citizens and Public Sector Collaboration (SOMUS) project.

The projects were chosen based on their approach of extreme openness towards collaboration and co-authorship. Despite their somewhat radical nature, both projects have produced traditional research articles, but in the case of NMRLP, for example, the authorship of these publications has been based on self-assessment by the contributors (contributors meaning anyone who has commented on the project blog). Another factor was that the two projects have been coordinated by Finland-trained researchers, since the case study is part of a PhD project focusing on national Finnish research integrity regulation. This led, for example, to the exclusion of the Polymath Project from the cases.

The NMRLP belongs to the field of molecular physics. It is ongoing at the time of writing, with all of its research outputs available online either on the project blog or via the GitHub service. SOMUS ran for two years during 2009–10, although the open online community, the Open Research Swarm, that gave birth to the project predates SOMUS by at least a year. SOMUS was a project in the field of multidisciplinary media studies and all of its research outputs were openly published online while the project was running. Unfortunately, the outputs were not placed in an open repository post-project and are currently accessible only on request. This is an issue that I will also address in the finished paper.

Research Methods

The primary sources for this study are thematic interviews with five coordinating researchers from the chosen projects, together with the open online content of the NMRLP and the archives of SOMUS.

This research is multidisciplinary in terms of both methods and theoretical framework, drawing inspiration from the behavioral and social sciences, especially the sociology of science and technology, with strong roots in social science history, itself very much multidisciplinary in nature. Oral history has been an important point of reference in conducting the interviews.

The goal is to make the collected interview data available for further research use under an open license and through an open online repository, in both audio and textual formats. This is not common practice in qualitative humanist and social scientific research. Sharing qualitative human data is ethically complicated, but I argue that the obstacles are somewhat exaggerated and the benefits not discussed enough. There is evidence that research subjects can be quite willing to allow re-use of their data, even if the data in question is sensitive (Borg & Kuula 2007), which is not the case for this study. Currently the biggest obstacle to sharing identifiable interview data in Finland is the data privacy ombudsman’s interpretation that so-called broad consent is in breach of the law regulating data privacy.

Preliminary Findings

The case study is still ongoing at the time of writing. More definitive results will be available at the time of SciDataCon 2016. Nevertheless, there are some interesting preliminary observations and findings to be shared. Markus Miettinen, one of the key researchers of the NMRLP, compared in a workshop presentation his initial fears when initiating the project with what actually happened. He listed eight risk scenarios, ranging from a lack of participants to scooping and personal conflicts. According to him (Laine et al. 2015):

[…] the experiences gained on open research during the NMRlipids project have been extremely positive, and none of the major fears we had before starting the project have actualized. Quite the contrary, the open research approach has proven to be an extremely fruitful as well as rewarding way to do research.

SOMUS has gathered similarly promising experiences. Opening up research funding proposals, possibly one of the most radical and controversial ideas developed under the Open Science movement’s umbrella, gets backing in the SOMUS project report (Kronkvist 2011):

Could research funding generally be applied to ‘open calls’? An often-used reason for a closed doors policy is that of competition. Research ideas can be stolen, and project applicants engage in tough battles for funding. It became clear through discussions with other applicants, however, that the open draft actually helped them focus their projects on different research themes to avoid direct competition with Somus. When a text is documented in a wiki, it is easy to find an author and time stamp for the text, making it uncomplicated to solve authorship questions.

To what extent these and other examples drawn from the cases can be generalized and translated to other research fields and environments, will be discussed in the final paper.


The author would like to thank the Tiina and Antti Herlin Foundation for their continuous financial support of her PhD research, of which this study is one part.

Competing Interests

The author declares that she has no competing interests.


1   See for example the recent statement made by EU member states about making all scientific articles freely accessible by 2020: http://english.eu2016.nl/documents/press-releases/2016/05/27/all-european-scientific-articles-to-be-freely-accessible-by-2020

2   Many research integrity guidelines limit the categories of research misconduct to falsification, fabrication and plagiarism (FFP), but for example the Responsible conduct of research and procedures for handling allegations of misconduct in Finland (2013) guideline by the Finnish Advisory Board on Research Integrity names misappropriation as a form of research fraud.


Borg, S and Kuula, A 2007 Julkisrahoitteisen tutkimusdatan avoin saatavuus ja elinkaari – valmisteluraportti OECD:n datasuosituksen toimeenpanomahdollisuuksista Suomessa. Yhteiskuntatieteellisen tietoarkiston julkaisuja. Tampere, Finland: Yhteiskuntatieteellinen tietoarkisto (Finnish Social Science Data Archive). pp. 37.

Kronkvist, J 2011 Somus as an attempt at a new paradigm. In: Näkki et al Social media for citizen participation – Report on the Somus project. VTT Publication 755. Espoo, Finland: VTT Technical Research Centre of Finland.

Laine, H, Lahti, L, Lehto, A, Miettinen, M and Ollila, S 2015 Beyond Open Access – The Changing Culture of Producing and Disseminating Scientific Knowledge. Proceedings of the 19th International Academic Mindtrek Conference. New York: ACM. pp. 202-205 DOI: http://dx.doi.org/10.1145/2818187.2818282

Van den Eynden, V and Bishop, L 2014 Incentives and motivations for sharing research data, a researcher’s perspective. Knowledge Exchange. Available at http://www.knowledge-exchange.info/event/sowing-the-seed [Last accessed 30 May 2016].

Comments and suggestions by the SciDataCon 2016 reviewers

Very concise description of the work done, and the methods. Please have a look at C. Borgman’s recent book about Big Data, Little Data, No Data (MIT), if you have not done so. Have a look at the specific research culture (in the field, institution, countries) which enables, fosters these experiments. What do you think is the probability of such research practices to diffuse more widely, to be adopted by many?
The paper provides a Finnish perspective on openly available research outputs, including project data, for two projects. The content appears interesting and addresses one of the biggest stumbling blocks for data publishing: the fear of being scooped and thereby becoming the victim of research misconduct.
Fear of being scooped is one of the major reasons given by researchers as to why they don’t share their data (amongst many others!). This paper reports on two case studies in which that fear was found to be unsubstantiated, and none of the fears of making data available were realised. Whilst this is only a limited survey, having hard evidence on actuality instead of possibility removes yet another social barrier to the sharing of research data. Although this paper doesn’t discuss incentives, it is suitable for the session as it discusses removing a disincentive. The authors indicate that more results will be available at the time of the conference, and these should be included in the paper.

OKCupid-gate: Ethics Accident Waiting to Happen

This is a very spontaneously written quick post inspired, or rather forced out, by the release of some 70,000 OKCupid users’ data (which I learned about yesterday). The data was released by a Danish self-proclaimed researcher and scientist, actually just a graduate student (I say ‘just’ not because I consider graduate students to be lesser human beings, but because he obviously was not aware of, nor acting according to, the professional codes of the research community, i.e. wasn’t a fully trained scientist). The data has since been removed from the Open Science Framework repository.

What I’m about to say next is a general remark on the current culture among the scientific community, rather than an analysis of the individual case that is the OKCupid-gate.

In a way this feels like an accident waiting to happen. The discussions concerning research integrity and ethics have been lagging far behind the progress of Big Data, Science 2.0 and Open Science. Both Science 2.0 and Open Science have so far mostly been playgrounds of natural scientists. Yes, there are the emerging fields of digital humanities and computational social sciences, but despite the buzz they remain marginal. Most of the human scientists applying computational methods and digital sources to human science research questions are having to go pretty DIY on their workflows, both in terms of practical and theoretical methods. It is not my intention to pit natural and human scientists against each other by saying that one is more ethically responsible than the other. What I am saying is that human scientists have a different, and research-wise deeper, understanding of all things human and social. It’s their job, after all. They are better equipped to understand the 50 shades of open in social media and see the potential harm that personal information “that is already openly available” can do if it’s released as open data.

All these discussions, about computational methods, Open Science, Science 2.0, Web 2.0, research integrity, natural sciences, human sciences etc., are going on in their separate bubbles. It is a terribly slow and wasteful way to proceed. I say it’s about time to start bursting these bubbles. First of all, we need to stop referring only to natural sciences as “Science” (yes, Anglo-Saxon world, I’m looking at you) and make the concept also include human sciences. This would help us to acknowledge that there are certain skills and lessons that every researcher, or scientist, no matter whether they are studying Big Bang the historical event or Big Bang Theory the sitcom, needs to learn. Human scientists have to start acquiring basic computational and research data science skills, and natural scientists need to better understand how their work relates to societal issues.

We should also break the Open Science bubble and make openness (as in accessibility and transparency) a prerequisite for good science. This would maybe finally rid us of the weird idea some people seem to have (among them both advocates and opponents of Open Science) that openness equals vomiting content onto the web, without giving a second thought to issues such as quality (metadata, licensing) or privacy (the OKCupid case).


Informed consent

I am currently working on a case study on responsible conduct of research in the context of open research collaboration. I have chosen three research projects that have been conducted online completely openly, with an open invitation for anyone to participate. I won’t name the projects yet for the simple reason that I intend to interview the key researchers of those three projects but haven’t approached all of them yet, and I feel it would be a little bit tacky if they heard about my project indirectly. So I’ll get back to the projects as soon as I have made my plans known to the persons involved.

What I want to discuss here are the practical aspects of forming informed consent for research subjects. As I plan to contact those who I hope will become my interviewees, I have been thinking a lot about the information I owe them about my project.

Informed consent is a concept central to the ethics of human research, i.e. research on human subjects. Here’s what my wise friend Wikipedia says about informed consent:

“An informed consent can be said to have been given based upon a clear appreciation and understanding of the facts, implications, and consequences of an action. To give informed consent, the individual concerned must have adequate reasoning faculties and be in possession of all relevant facts.”

The concept was first introduced in the medical sciences, but it is equally important to research in the humanities and the social and behavioural sciences, which can often involve minors or deal with sensitive issues such as domestic abuse, sexual orientation or political views, just to name a few examples.

My research subject, which deals with people in their professional roles, is not sensitive, but because of my commitment to the principles of openness it requires thorough ethical reflection. There is practically no precedent concerning open, unanonymized, qualitative interview data, at least none that I know of. I will get back to the challenges I have experienced in trying to find a repository for archiving and sharing my data in another blog post. But before I can archive, let alone share, any data, I need to be sure that my research subjects understand what they are getting involved in and agree to everything that I’m doing.

In order to inform my interviewees I have drafted a project description, which can be found as a Google document here. The document has been approved by the University of Helsinki Ethical Review Board in the Humanities and Social and Behavioural Sciences (my second supervisor is the chair of the board, but she recused herself from the decision making in my case). In putting together the document I have followed the Finnish Advisory Board on Research Integrity (TENK) ethical principles in the humanities and social and behavioural sciences:

Information regarding a study should include at least the following:

1) the researcher’s contact information,

2) the research topic,

3) the method of collecting data and the estimated time required,

4) the purpose for which data will be collected, how it will be archived for secondary use, and

5) the voluntary nature of participation.

Subjects may ask for additional information regarding the study and researchers should prepare for this in advance.

Assenting to be interviewed can be considered consent to the interview data being used for the purposes of the research project in question, without any additional paperwork. For the archiving and sharing of the data I have decided to ask for written consent. The consent form I have formulated follows the example given by the Language Bank of Finland (I can’t find the model form anymore after they updated their website, sorry about that), with some minor alterations and additions. At the moment Zenodo looks like the most likely repository for my data, but as I mentioned above, I will get back to this issue, since it has caused me a lot of headaches. Stay tuned.

New Year’s Resolutions: My Open Pledge

Ever since I heard Erin McKiernan talk about her Open Pledge at OpenCon 2015 I have been thinking about making my own. It seems only appropriate to publish it at the beginning of a new year. In addition to McKiernan I have been inspired, for example, by the Open Science Peer Review Oath and the responsible conduct of research guidelines I’m studying.

My research is open by default.*

I will not be open at the cost of my research subjects’ privacy; when in doubt I will refrain from publishing.

When I don’t share something I will explain the decision openly and honestly.

I will share both my successes and my failures.

I will blog my research.

I will communicate my research in a way that is understandable also to people outside the research community.

I will either publish in full open access journals or traditional journals that allow self-archiving.

I will not publish in so-called hybrid open access journals.

I will not publish in journals owned by companies that exploit the research community.

I will be an advocate of open science and speak about my choices.

*See here what I mean by ‘open by default’.

Open by default

This text is based on a short presentation I gave today in a webinar organized by Open Access Academy, EURODOC and FOSTER, titled Open Data – How, Why and Where? I was asked to speak about why early career researchers should care about open data.

Sometimes I wonder whether we are doing ourselves a disservice by talking about Open Access, Open Data, Open Science, Open Source, Open Notebook Science etc. All of these labels make openness seem like something new, something complicated and something that adds to the burden of researchers, when it is exactly the opposite. To me Open Science is just plain good science, both in terms of academic excellence and research integrity.

Open Access and Open Data are already reality. More and more funders expect that scientific publications done with their support are openly available for maximum impact. Policies are not yet as demanding for research data, but there is an increasing amount of incentives and pressure to make it more openly available too.

In light of this, you don’t need me to tell you that you should care about open data. What I can tell you instead is why you should embrace and practice open data, even go beyond it.

What I’m trying to do with my dissertation is to go beyond open access and open data and make open the default setting of my work. It makes life so much easier compared to the other option.

When I got the research grant I’m currently working on I wanted to put into practice the things I had been trying to advance in my previous job as the coordinator at the Open Science and Research Initiative by the Finnish Ministry of Education.

I tried to dissect my research to make it fit into boxes labeled Open Access, Open Data, Open Methods and so on. It felt forced. None of the policies seemed to apply to my particular case. I didn’t even understand what the word data meant for me: I use non-digital archival sources (that means paper), published books (more paper), statements and guidelines and also collect interview and survey data.

When I asked myself “What can I publish as Open Data?” I hit a wall: nothing.

Nothing that I produce or can produce meets the requirements of, for example, the Open Knowledge Foundation’s definition of open data. But instead of giving up and simply concluding that qualitative social scientific research will forever be a bystander in the Open Science discussions, I wanted to find a different approach.

So I rephrased the question, asking “What can I NOT publish openly?”. This way I don’t have to justify openness or worry about doing it right according to this or that policy. This new attitude has removed the fear of not fitting in and given me a clear direction. I can concentrate on doing the most interesting research I can, as transparently as I can, on my own terms.

There is of course a lot of work to be done and things to figure out, like ethical issues, tools, platforms, formats, metadata, you name it. Fortunately there is a great community out there, ready to give support.

I want to stress that being open by default doesn’t mean being careless. I can do trial and error with my own life but not with the lives of my research subjects. That’s why I will keep on asking the question until my research is finished and keep on getting different answers in different phases of the process. I have put a lot of effort into planning my work and will continue to do so. Whenever I am in doubt about publishing something I’ll take a time-out and proceed only when I can be sure of not harming anyone.

To summarize why you should make openness your default setting:

  • don’t just passively wait to see what demands funders and other makers of research policy will impose next; be proactive and create your own practices

  • be a member of the open science community

  • do good science that has real impact

  • advance your career: more and more recruiters and funders take published data into account; you can also gain traditional merit via the citations that your open access articles and published data sets generate

  • stop being afraid of failure: you’d be surprised how many people are interested in your non-significant results; reducing publication bias is also an ethical issue

  • you might not be in academia forever, so build broad expertise and a personal brand

Closed for good? – How ethical issues can limit openness

This is a lightning talk I gave at the Knowledge Exchange Pathways to Open Science event in Helsinki today as part of the ‘Benefits, risks & limitations of Open Scholarship’ theme.

I am a doctoral student in economic and social history at the University of Helsinki. One of the research questions I am working on is about the answers that open research processes provide to ethical challenges in research, such as plagiarism, data fabrication and author misconduct.

But today, for the next few minutes, I’m taking on the role of a devil’s advocate.

I argue that human-related data can never be open in a way that is ethically sustainable.

That is, at least if we understand ‘open’ along the lines of the Open Knowledge Foundation’s definition, which states that open data is something that can be freely used, modified, and shared by anyone, for any purpose.

I collect interview data for my research. I want to share that data already during the research process. My interviewees are easily identifiable: former chairs and secretaries general of the Finnish Advisory Board on Research Integrity and key researchers of certain open research projects. There isn’t much point in anonymizing the interview data, although I’m prepared to go there if they request it. I’m giving my subjects a consent form to sign. By signing they agree that their interviews can be freely used for educational and research purposes, without embargo. Commercial use is not included in the consent, which is already a breach of the open definition.

The topic of my research isn’t sensitive in the traditional sense, since it deals with people in their professional roles, without going into matters of health or family relationships. I have subjected my research plan to ethical evaluation, which deemed it ethically responsible.

Do I feel like I can declare with certainty that no harm will come to my subjects because of my research? No, I can’t. We live in a world where researchers studying subjects such as (and these are real examples from Finnish research) nutrition, wolf populations and indoor air problems get death threats. The Internet is an unpredictable and often unkind environment.

The Helsinki University ethical review board that evaluated my research gave me two questions to ponder:

“How does the choice to not anonymize the interview data affect the quality of gathered information (sample, content)? There is a danger, that ethically critical aspects will not fully surface due to fear of labeling, leading to a subdued result.”

I understand this concern, but in the case of historical research and oral history, if we hide the identity of the speaker, we might hide the historical context, and in so doing destroy the historical value of the data.

The second question is even more haunting.

“Could interview data that is in principle harmless give rise to new sensitive information on research subjects?”

This one is giving me sleepless nights.

Currently a lot of human-related research data is routinely destroyed due to privacy concerns. All the while, private companies are collecting vast amounts of human-related data from citizens who don’t really like giving up their data and certainly don’t trust these companies, but are too resigned and without alternatives (because the digital has become a prerequisite of the social) to resist.

So what do we do? Change the definition of ‘open’? Change the definition of ‘ethical’? Accept as inevitable that at least some human sciences get left behind? Or is there a way we can move from closing human data for good into opening it for good?

To rephrase the question: how can we build a culture of trust, and what kind of mechanisms are needed to support it, so that we can preserve qualitative human data for generations to come?


Can too much openness ruin a research interview?

Interviewing as a method of acquiring research material calls for a lot of sensitivity. When doing a research interview, you try to influence the person you are talking to as little as possible. If the other person is searching for the right words, you don’t jump in with helpful suggestions, like you might normally. You certainly don’t try to convince them of anything. The questions posed should be as neutral as possible, allowing a wide array of different possible answers. Instead of asking “Was it like this?” you ask “What was it like?”.

But what if one, like me, is trying to do research openly? The work plan is published for everyone to see, revealing some hypotheses and other preconceptions about research outcomes. The risk that this information will influence the interviews, and through them the research results, is to me very real. I am an active advocate of open science, so writing the following words is not easy: at least for those of us who call ourselves social scientists, there really exists such a thing as too much openness, and it can jeopardize the validity of our research.

In addition to (contemporary) archival documents, interview data is an important part of my source material. I talked the problem of too much openness over last week with my supervisor, Erika Löfström. We didn’t find a simple fix-it-all solution, but instead came up with a concept called “hallittu avoimuus”. Controlled openness would be the literal translation, but to me it sounds like a euphemism for anti-openness, whatever that could be. I prefer the term conscious openness. What it means is basically that it’s good to pause and think before pressing “enter”.  Not a revolutionary idea, I know, but I’ve noticed that saying aloud seemingly obvious things can often prove surprisingly fruitful.

To me openness is not a value in itself, but a means to an end. For example, my primary motive for this blog is not attention for the sake of attention. The point of open science and research on a systemic level is to increase the quality of research, strengthen the role of evidence-based knowledge in society, and make research more resource efficient and more accessible. On an individual level the benefits are networks and community, ideas and feedback, as well as increased impact of one’s work (none of these of course come for free, but that’s another blog post).

In order for science to improve through openness, we need to be conscious about what goes out there. That it’s information, not just noise. That it’s not counter-productive. Just as data without proper metadata is just numbers, a research plan that becomes a self-fulfilling hypothesis is just letters (and that’s the positive scenario). No new (reliable) knowledge gets created in either case.

Conscious openness is actually very close to what anyone dealing with human subjects has long had to practice. A concept called informed consent is at the core of ethical research with human subjects, both in invasive medical research and non-invasive social scientific research. The subjects need to know what they are getting themselves involved in, which is easier said than done. How to inform the subjects without affecting the results? How to tell them, often non-experts in the field, about complex research questions so that they really, truly understand all of the aspects? How to do this in a way that doesn’t scare them, insult them or bore them to death? It’s not enough to explain the premise of the research; you also have to give a detailed account of data management. This is often where ethical review steps in.

Open research as a process could take its lead from the practices of forming informed consent for research subjects: What does this public have the right / need / interest to know? How should I choose my words in order to avoid misunderstandings? The biggest difference from earlier practices is that open becomes the default setting, from which you refrain only with good cause. A key aspect of the process, one that determines whether the research in question is legitimately open or just open-washed, is transparency concerning the various bits and pieces you are not publishing, and why.

How shall I be putting this conscious transparent openness construction of mine into practice? Right now I’m frozen in the finger-on-enter-but-pausing-and-thinking phase as I’m preparing to collect interview data. I will publish an updated work plan and a post detailing the intellectual whereabouts of my work sometime during the coming weeks.