Development of an Intervention to Achieve Optimal Trust in Automated Insulin Delivery
In the future, patients with Type 1 Diabetes mellitus (T1DM) in Germany will have access to several systems designed to estimate blood sugar levels and administer insulin automatically and accurately. One advantage of such systems is that, while individual patients react differently to certain foods, exercise activities, and even the medication itself, these systems can learn individual differences and react quickly and adequately.
However, this is only possible if a system is not constantly being disturbed and manipulated, e.g., by being deactivated or overruled. In strong contrast to the ideal learning environment of such systems, two aspects must be considered. Firstly, T1DM patients have often managed their chronic illness on their own for a long time and may therefore hesitate to change their established routines. Secondly, the stakes are high, as both hypo- and hyperglycemia can severely impact a patient’s ability to go about their day. Therefore, any disease management technology faces the challenge of users who may be wary of the system. At the same time, there is great potential in more automated blood sugar regulation, which would reduce the workload for T1DM patients, who often must spend a significant amount of their time on disease management.
A higher user workload resulting from underutilization, due to, e.g., a lack of trust or a fear of skill degradation, is a well-known problem in Human-Automation Interaction. Although the exact cause of this underutilization is not fully understood, we propose that its negative impact can be mitigated through experiencing trust-building situations, e.g., taking part in challenges in which one is forced to rely on the system, which in turn successfully manages the situation or improves one’s understanding of how the system works. A detailed exploration of patients’ experiences in their interaction with intelligent disease management technology is therefore needed to examine why patients might interfere with a system’s training, and under which conditions they do not feel adequately informed about a system’s processes. An appropriate approach is the use of structured interviews informed by the latest research on trust in automation in the medical domain, further supplemented by research on subjective information processing awareness.