Think about the word ‘ethics’ for a moment. For some, the word conjures images of smiling people sitting around a table, the picture of diversity, happily planning a future in which no one is ever taken advantage of. For others, the image may be of nun-like ascetics peering over your shoulder with an armful of paperwork tied together with a pretty bow of red tape. For still others, it’s something heartily discussed in a liberal arts course or in late-night dorm philosophizing during doe-eyed college days. In reality, though, practicing ethics is never as clear-cut as any of these images, and making ethics part of daily research life remains a distant goal.
Some fields, like genetics and medicine, have had to confront ethical conundrums head-on and have consequently created a precedent for how we think about ethics in a research and institutional context. Sadly, this precedent is full of angry conflict, ethical missteps addressed only after the fact, and millions of dollars’ worth of lawsuits. It rightfully leaves many people jumpy about addressing ethics head-on: ethics becomes the proverbial third rail of program management that no one dares touch for fear of inviting the flak seen in those precedent cases. To use another clichéd analogy, ethics then becomes the elephant in the room, except this elephant is staring at you over your cubicle wall and periodically sticking its trunk over the wall to search for peanuts. In reality, choosing not to address ethics amounts to consciously deciding to accept whatever emerges organically, whether you like it or not. So what does this mean for less life-or-death fields that work with stakeholders, like the marine sciences? Let’s start with the foundation that’s already laid.
Research ethics with human subjects, especially in medicine, sets the precedent. Let’s run quickly through the legal requirements.
The Nuremberg Code (1947)
The Code was created in 1947 as a result of the Nuremberg Trials, which included the sentencing of Nazi doctors for experimenting on Jewish patients during and before World War II. Though its provisions are now considered fairly basic medical practice, there are still recent and recurring cases of violations. The Code contains 10 distinct parts, including discussions of risk and partial-participation options, but most people know only the first – informed consent. In short, scientists must provide potential volunteers with the tools “to make an understanding and enlightened decision” about participation. Informed consent is also the part most applicable to the social science methodologies that these laws now cover.
The Declaration of Helsinki (1964)
Put together by the World Medical Association, the Declaration is aimed primarily at physicians. However, it emphasizes practices around identifiable human material and data, which could very well include recorded or transcribed interviews in addition to the blood samples it originally targeted. It also advocates for cultural sensitivity, requiring adherence to local laws, rules, norms, and cultural practices around the subject at hand, and stating that underrepresented groups “should be provided appropriate access to participation in research”. Most importantly, the Declaration requires research ethics committees to oversee research practice, make sure that particularly vulnerable populations are protected, and ensure that the research maximizes benefits while minimizing risks.
The Belmont Report (1979)
The Belmont Report represents our most modern thinking on human subjects research – and explicitly subjects the social sciences to the same expectations as clinical trials. It was written by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research shortly after the Commission’s creation through the National Research Act. It re-emphasizes informed consent, with further definition and examples for particularly tricky cases. More importantly, it guides projects that lie on the boundary between research and practice towards ethical oversight: “the general rule is that if there is any element of research in an activity, that activity should undergo review for the protection of human subjects”. It also clearly defines three principles that should guide thinking in both research and practice – respect for persons, beneficence, and justice. In short, interacting with the public should happen through a relationship of respect in which both participants and society gain knowledge, and that knowledge is shared with everyone who may benefit from its findings.
These laws are aging, and the memories of the motivations for their creation are fading. Ethics in practice today revolves around Institutional Review Boards (IRBs) that check whether research is in line with the principles decided upon by ethics experts of bygone days. These IRBs are typically housed within universities, and the definition of ‘research’ is mostly determined by whether the results will be published in peer-reviewed literature. The burden of checking for ethical oversight is placed on those journals, many of which shirk that responsibility or don’t know they have it. There is great variation in how different IRBs treat the same research, and accusations of political maneuvering occasionally surface. That is to say, the IRB system could stand some major improvements, and perhaps a realignment to its previous trajectory of expanding to more and more kinds of applications. However, the basic motivations – to check on the Belmont Report’s criteria of respect, beneficence, and justice – are still quite valid.
Practically speaking, IRB review generally takes roughly a month (depending on the IRB and how overworked it is), beginning with investigators filling out an application. The application requests information on methods, plans for receiving and documenting informed consent, and a cost/benefit analysis focused on benefit sharing and potential risks associated with the research. There is also a checklist for the kinds of subjects that will be recruited – prisoners, children, and indigenous groups usually among them – to make sure that they are “protected against the danger of being involved in research solely for administrative convenience” (Belmont Report). If researchers aren’t planning to recruit these vulnerable populations and risks are minimal, the research may be exempt from full review. This means that if appropriate plans are in place for informed consent and data storage/sharing, the research may continue. Otherwise, a full panel meets to discuss the research, weigh the benefits and risks, and may require certain actions as part of the review process to ensure protection of human subjects. Often, this discussion concerns data use when the data involve identifiable information and intellectual property rights. Over time, researchers tend to form relationships with their IRB agent, and the whole process can be relatively painless and even help refine the research methodology to maximize benefits.
Taking off the rose-colored glasses:
Science is progressively becoming more public, and we are again in an era of precedent-setting in the arena of participatory research. The line between research and practice has become more blurred, and IRBs are so affiliated with the university setting that many researchers don’t even know they need to talk to one. I’ve discussed my personal experiences with this before, regarding citizen science as a field and what happens when research occurs outside the academy. Yet, as is common with ethical boundaries, negative experiences get the most publicity. The most public of these was the blogosphere reaction to uBiome.
uBiome is a crowd-funded project that allowed volunteers to pay a small amount to have their microbiome sequenced. Basically, you swab parts of your body, put the swabs back in the kit, and send them to uBiome for sequencing. You also answer a survey about personal habits, demographics, social connections, and health. uBiome promotes the idea that you can come up with hypotheses about the relationship between these variables and your microbiome that can be answered through the uBiome database. The company didn’t go through IRB review before asking people to donate money to the research, and as a private company providing a service, this opened up all sorts of questions about what a modern IRB should look like. That, at least, is how uBiome responded when a number of bloggers raised serious ethical concerns about the sharing of very personal data.
There’s much more to read on the specifics of this case, but I’ll point to a couple of summaries that lead us to think about the future of ethics in citizen science. First, informed consent requires transparency, and accusations that uBiome overstated benefits and waffled on its use of identifiable data prevented full ethical review of its work; this line of logic concludes with uBiome as “a cautionary tale for citizen science”. Second, ethics like “informed consent and proper oversight are things that need to be done right every time”, even when a project seems relatively harmless.
The history of ethical missteps is not restricted to the clinical sciences (although many of the most egregious examples come from that side of research). The long history of social science around indigenous and traditional knowledge is littered with cases of intellectual property appropriation. A classic example: asking an indigenous guide to point out medicinal plants, then returning home, synthesizing the active compound, making millions, and returning none of the profit to the guide or the community that shared their knowledge – a process now dubbed biopiracy. People return to the ‘justice’ part of ethics when addressing biopiracy, and a number of international protections of indigenous rights, including rights to their knowledge, have emerged.
Having no policy both invites criticism of a given project and provides a dangerous opening for science to get ahead of ethics, then turn around and regret it. New forms of science will require new ways of thinking about ethics. These might well be grounded in protections for different types of knowledge, since that is what human dimensions of the environment research seeks to use to produce better conservation practices.
Future Rules of Engagement
Human dimensions research shares many ethical concerns with historical research, but it also opens all kinds of new doors and includes a whole host of people whose needs must be balanced. Two big discussions are already underway in the citizen science community, since that is the one aspect of human dimensions that occupies new scientific space.
The second discussion combines the participation of vulnerable groups (since you never really know whom you will recruit to citizen science) with setting the standard for research behavior among all participants. Vulnerable groups perhaps should be included, in the name of diversity and of getting a wide range of perspectives and experiences heard in data interpretation – but that also requires being tuned in to cultural sensitivities and ensuring protections for historically underprivileged groups. On the flip side, participant behavior impacts the reputation of the whole project or sponsoring organization, so some discussion of sanctioned activities needs to occur at the outset of the project. Many anthropologists have navigated these reputation issues through community contracts, now the official policy of the American Anthropological Association, that explicitly attend to community and researcher needs, risks, and rights.
New issues will surely crop up as human dimensions and citizen science research continues to develop. Hopefully some best practices will emerge as groups learn from each other. In the end, though, the message is clear – there is no one right way to approach ethics, so long as you approach it. Remember, it’s about respect.