Is it safe?


‘I’m a person experiencing X. What solution will help? Will it work? Is it safe? Is it worth it?’

Any person who is dealing with an issue and wants a solution is likely to ask these questions, implicitly or explicitly. Arguably, the central purpose of the health research and clinical enterprise is to provide people with answers to them. The traditional way these answers are found is through population-level research: appropriate study designs, such as randomized controlled trials; data; and statistical analyses, such as averages, outliers, and population-level statistical inference.

The approach for answering these questions is, by necessity, going to be different for patients, because population-targeted research tools either do not function well for individuals or are unavailable to them. I’ve heard some argue that solutions patients have created for themselves cannot be effective or safe because they have not been evaluated using population-level tools.

However, the excellent results individuals experience in the OpenAPS community (see these papers 1, 2, 3, 4, 5, and 6) and the high degree of safety that has been maintained among OpenAPS’ers suggest otherwise. Many individuals who built their own OpenAPS continue to use it even though a commercial artificial pancreas is now on the market (in the US, though not in all countries), which suggests the system is effective and safe enough for them to keep choosing it. This level of safety and effectiveness is not occurring by accident; the group is using a different process for answering the above questions. That highlights the possibility of multiple ways to answer them, a point not well acknowledged in the current health research enterprise.

While there are likely many ways, one pattern that I think is particularly powerful for achieving safety and effectiveness for individuals is the combination of personal agency, communal tuning, and pooled responsibility.

In terms of personal agency, a core message that I heard from ‘people formerly known as patients’ who took part in our convening was: “no one is coming.” Framed this way, not acting becomes an explicit choice that the status quo is one’s best option. Each patient we encountered decided that they couldn’t rely on others to create solutions for them; instead, they had to take personal agency in creating the solution or pursuing whatever was right for them. Personal agency is a powerful starting point that the current healthcare research enterprise cannot use or assume, and it’s an incredible strength for patient communities. Why? Each person has a strong motivation to get the safety and risk/benefit balance right because, ultimately, they are individually responsible for the solutions they create for themselves. Not surprisingly, no one in the community wants to be harmed by the tool they co-create, which sets up a sort of pooled responsibility. For many, this can be too much to ask (hence the logic of traditional research), but for those who are willing to take on this personal agency, we shouldn’t undermine its power as a starting point for achieving safety.

Whenever a problem is complex, it is very hard for any one person to fully grasp it, let alone figure out how to create solutions for it. Because of this, personal agency may not be enough to produce safe and effective solutions. Communities working together toward both personal and collective benefit, as is increasingly common in open source communities, are a powerful complement to personal agency. From the perspective of safety, a community that pools its responsibility to provide checks, balances, and, ultimately, “tuning” of knowledge, expertise, and industriousness across a group is a powerful balance to personal agency. I wrote about this idea in a previous blog, but to briefly recap the example within the OpenAPS community, I see a few things going on that facilitate tuning toward safe and effective tools. First, there is a shared intent that everyone personally benefit from more effective management of blood glucose levels among persons with type 1 diabetes. Second, there is an emphasis on personal agency (as already discussed). Third, there is an iterative, data-informed process of checks when vetting an idea, which brings the wisdom and diversity of insights across the community to bear when creating tools. Specifically, the community engages in a long series of tests, starting with a new idea written out in plain language that the community discusses for its intended and unintended consequences. Next, the idea is written into code and, again, the community checks and tunes it to achieve the goals. Following this, Dana (or another super-experienced OpenAPS’er) tries it on themselves in a highly controlled environment, often with someone (e.g., Scott) also monitoring for safety. Results are shared and further discussed. When everyone feels comfortable, a few more advanced users try it out, and so on.
The community continues to tune its collective understanding to facilitate progress toward safe and effective tools. Because the community is diverse in its knowledge of and expertise around type 1 diabetes, it brings to bear a wide range of information, including external evidence (i.e., scientific insights from traditional researchers), clinical expertise, and the self-knowledge of the community.

To summarize, one way of achieving safety and effectiveness for patients that does not require population-level tools has three components. First, people are willing to accept agency over their lives, which enables them to act. Second, a community of self-driven individuals may form to help each other do what they want to do safely and effectively together, while still maintaining each person’s autonomy; this is common practice in well-functioning open source projects, which suggests the breadth of this approach. Third, data are available to indicate whether different solutions are helpful or not (supplemented with appropriate n-of-1 study designs, when needed). Central to this, the community is not there to “stop” a person but, instead, to help each other, as equals, think through intended and unintended consequences and to create a place of pooled responsibility and understanding. Ultimately, the decision to act or not is made by each individual, including whether to remain a part of the community or, in open source terms, to fork their efforts into something more personally beneficial. This is a powerful basic structure that other patient communities could look to for guidance and support in producing safe and effective tools.
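To make the n-of-1 idea concrete, here is a minimal illustrative sketch, not taken from OpenAPS code or any real dataset. It assumes a person alternates between two settings in blocks of days and compares a personally meaningful outcome, here the fraction of glucose readings in a common target range (70–180 mg/dL). All names and numbers below are hypothetical.

```python
# Illustrative n-of-1 style comparison (hypothetical data).
# A person alternates two settings across blocks of days:
#   "A" = current configuration, "B" = a community-vetted candidate change.
from statistics import mean

def time_in_range(readings, low=70, high=180):
    """Fraction of glucose readings (mg/dL) inside the target range."""
    return sum(low <= r <= high for r in readings) / len(readings)

# Hypothetical daily readings under each setting.
blocks = {
    "A": [[95, 160, 210, 140], [100, 185, 150, 130]],
    "B": [[105, 150, 140, 120], [110, 145, 135, 125]],
}

# Average time-in-range per setting, across that setting's days.
summary = {
    setting: mean(time_in_range(day) for day in days)
    for setting, days in blocks.items()
}

for setting, tir in summary.items():
    print(f"Setting {setting}: mean time-in-range = {tir:.2f}")
```

A real n-of-1 design would add safeguards this sketch omits, such as randomizing the block order, using washout periods between settings, and collecting enough blocks to support inference; the point is only that an individual’s own data can speak to “is it working for me?”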
