As Dana highlighted, a problem exists: “patients” and “subjects” feel like they are the subject of conversations about science, not part of those conversations. For all of the reasons Dana highlighted, this is not good for people formerly known as patients/subjects (henceforth, just called a person or people).
As I’ve thought about this, I’ve come to what some may view as a radical conclusion:
Excluding humans with lived experiences about an issue, such as experiences with an illness, hurts the trustworthiness of scientific consensus related to the issue.
To understand how, it’s important to understand the concept of scientific consensus. Naomi Oreskes’s TED talk, Why we should trust scientists, provides an accessible overview. I personally like a complementary, slightly more technical formulation from Miller. Miller argues that scientific consensus is knowledge-based (or, as I’m calling it, trustworthy) when the following three independent conditions are adequately present:
1. The Social Calibration Condition - all parties to the consensus are committed to using the same evidential standards, formalisms, and ontological schemes.
2. The Apparent Consilience of Evidence Condition (henceforth, the Convergent Evidence Condition) - the consensus is based on varied lines of evidence that all seem to agree with each other.
3. The Social Diversity Condition - the consensus is socially diverse.
What does that mean?
People need to want to work together (social calibration condition), use a wide range of methods to understand an issue (convergent evidence condition), and include a diversity of perspectives and explanations when trying to understand an issue (social diversity condition). Each of these conditions exists on a gradient; you can always have more or less social diversity, for example. The more fully each condition holds when creating consensus, the more trustworthy that consensus is likely to be.
How is this linked to Dana’s experience and why does it matter?
First, people’s lived experiences are one way of knowing a phenomenon (convergent evidence condition). Second, people’s explanations about what might be going on in their lives expands social diversity (social diversity condition). If the scientific community can find ways to meaningfully integrate lived experience and plausible explanations from people (social calibration condition), then any consensus that emerges will likely be more trusted by people, particularly those who will use the information.
(And as a quick aside, some researchers might be thinking right now, “we do ethnographic and other qualitative work to incorporate this lived experience into our thinking.” Yes, that is true. What would happen if people themselves had ways to translate their lived experience into plausible observations that could be incorporated into scientific discourse? This line of thinking is complementary to, not supplanting, ethnographic and qualitative work.)
Let me unpack that a bit.
What is better understood in a person’s lived experience that is lacking in other methods (convergent evidence condition)? History and context.
A person’s lived experience is the accumulation of their life. Even the best datasets do not include all of this information. Further, a person has a far more intimate understanding of their own context. For example, a researcher looking at someone else’s data may see a “weird” data point and label it an “outlier” to be ignored. For the person with the lived experience, that data point can easily be understood in context: “oh yeah… that’s when I went to my sister’s wedding; I ate too much because the cake was SOO good.” The more “outliers” (to the researcher) that are understandable to the person, the more important it is for the person to be able to contextualize those data. Without people contextualizing their data, it is quite plausible that these outlier data points will be excluded from analyses entirely or adjusted to fit the researcher’s expectations. This, in a microcosm, is just like what Dana illustrated in her patient-in-the-cage visualization: people’s “data” not getting incorporated into the scientific evidence base.
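To make the point concrete, here is a minimal sketch of how a routine outlier rule can silently flag and drop exactly the kind of data point described above. All numbers, the z-score cutoff, and the “wedding” annotation are illustrative assumptions, not anything from an actual study:

```python
# Hypothetical sketch: an automated outlier rule flags a data point
# that the person themselves could easily explain in context.
# All values and the "wedding" note are invented for illustration.

from statistics import mean, stdev

# A week of someone's daily calorie estimates; day 4 is the wedding.
days = [2000, 2100, 1950, 2050, 4200, 2000, 1980]
context = {4: "sister's wedding - the cake was SOO good"}

mu, sigma = mean(days), stdev(days)

for i, value in enumerate(days):
    z = (value - mu) / sigma  # how far this point sits from the mean
    if abs(z) > 1.5:  # a typical automated cutoff
        # Without the person's annotation, this point would simply
        # be excluded; with it, the "outlier" is fully explained.
        note = context.get(i, "no explanation available - excluded")
        print(f"day {i}: {value} flagged as outlier ({note})")
```

The mechanical rule is indifferent to *why* the point is extreme; only the person’s annotation distinguishes a data error from a sister’s wedding.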
The insights about history and context from lived experience are complementary to other methods. For example, lived experience can contextualize traditional methods such as epidemiological research or clinical trials. When lived experience and traditional methods are combined, they offer a broader understanding of a problem, and thus the convergent evidence condition is strengthened.
What is gained from listening to each person’s explanations (social diversity condition)?
For science, if lived experience gives a person a unique understanding of how history and context shape their life, then their explanations could also be unique. This means the social diversity of possible hypotheses to explain a phenomenon increases when people are involved. With fewer unique perspectives, a confirmatory circle of consensus can emerge among professionals that contradicts people’s experiences and explanations. Again, this is exactly what Dana illustrated in her patient-in-the-cage diagram. This would be bad for science, as alternative hypotheses and explanations would never be explored and then supported or ruled out.
There’s also a social/cultural dimension here that may be even more important.
If people don’t feel heard, even if scientific consensus is technically right, a person will likely not trust it.
For example, imagine a person meeting with a medical doctor and raising concerns about vaccinations. The person offers what, in their mind, is a plausible causal explanation of how vaccinations could increase the risk of autism for their child. If the doctor responds by saying something to the effect of, “that’s not right; just trust the science,” then the person won’t feel heard or acknowledged, and their beliefs will, in all likelihood, move further toward anti-vaccination arguments.
The alternative is to listen.
I know this firsthand, as I’ve had conversations with people flirting with anti-vaccination ideas. Rather than dismiss them, I listened carefully and slowly but surely worked through each part of their explanations and beliefs, often using their own data, explanations, and evidence as the starting point of our discussions, with the goal of checking assumptions and the veracity of each claim. I never rejected a notion out of hand; instead, I considered each one carefully on its own merits relative to what might be known from other sources of information (i.e., I applied the convergent evidence condition). Through those conversations, I’ve helped some people feel comfortable vaccinating their children. I did this by honoring who they were as humans and the good intent they had. They weren’t bad people; they just hadn’t worked through the details enough.
In my view, listening just might be one of the key ways to increase trust in science more broadly across society.
Put this all together, and, in my view, there’s an important need for science to meaningfully include those with lived experience in scientific discourse.
To be sure, this line of thinking requires a lot of work.
What are the methods and processes that can be used to formally incorporate lived experience into scientific discourse? What are the assumptions that undergird it as a source of evidence and explanation? What biases are baked into the insights gleaned from lived experience? What are the complementary methods, with complementary assumptions, that balance out lived experience? How are a person’s explanations articulated and/or elicited such that they can be part of the scientific discourse? How do we build a space for robust social calibration that includes not just professionals but also individuals with lived experience who seek to join the discussion? How do we determine a “valid” perspective and, thus, exclude or rule out misinformation, #FakeNews, and the like? How do we have conversations with people who may be experiencing the Dunning-Kruger effect (i.e., an inflated sense of confidence that is not linked with reality)? How might we identify those who are thoughtfully engaging in the discourse versus those who may be trying only to benefit from it (e.g., for financial gain or for social status and prestige)?
These are not easy questions to answer. But the current default in the health sciences (and likely in science in general), which largely ignores those with valid lived experiences, is not right. That sets up the need for us to work through these thorny issues.
The time is now to figure out how to include persons with lived experience in scientific discourse.