A new generation of implantable AI brain–computer interface devices (advisory systems) has been tested for the first time in a human clinical trial, with significant success. These AI predictive implants detect specific patterns of neuronal activity, such as an epileptic seizure, and provide information to help patients respond to the upcoming neuronal events; as such, they are advisory systems. By forecasting a seizure, the AI device gives patients control over how to respond and lets them decide on a therapeutic course ahead of time. In theory, these AI advisory implants could serve a wide range of clinical and non-clinical applications, such as augmenting and empowering agential cognitive capacities (e.g. reasoning, learning, decision-making, information retrieval and analysis), but also predicting unwanted outcomes (e.g. depressive episodes, addictive habits, socially reprehensible conduct). Being advised by an implantable AI system can improve an individual's quality of life; however, doing so does not come free of ethical concerns. There is currently a lack of evidence concerning the various impacts of invasive AI brain implants on patients' decision-making processes, especially how being in the decisional loop affects patients' sense of autonomy. This presentation addresses these gaps by providing data obtained from a first-in-human clinical trial involving patients implanted with advisory brain devices. It explores ethical issues related to the potential psychological harms of an AI device that 'knows better' than the implanted individual.
Frédéric Gilbert focuses on bioethics and is an expert in neuroethics. He is not a scientist but a philosopher. By monitoring patients with brain devices, Dr Gilbert grapples with the ethical questions posed by invasive brain technologies. His research informs the debates that guide policy regulation, especially in regard to human clinical and experimental trials.