
The therapeutic caveat: Prohibitions on manipulation and persuasion in the EU AI Act


The European Union’s approach to regulating artificial intelligence in the EU AI Act reflects a balance between harnessing AI’s benefits and addressing its associated risks. In the law’s final draft, prohibitions on manipulation and persuasion ‘shall not apply to AI systems intended to be used for approved therapeutic purposes on the basis of specific informed consent’.

This provision appears to carve out an exception for therapeutic uses of AI, recognising that manipulation or persuasion might be beneficial or necessary for achieving therapeutic goals in some cases. This is particularly salient in fields like mental health, where AI might assist in behaviour modification or treatment adherence. With over 10,000 mental health apps currently on the market, safeguards against the sharing of sensitive consumer data and minimum quality standards would be crucial for services such as chatbot psychotherapists, yet neither is currently in place.

At the same time, permission for therapeutic manipulation could be exploited if it is interpreted broadly or ambiguously. Companies may attempt to stretch the definition of “therapeutic purposes” to cover uses other than actual therapy, or to embed therapeutic claims in their terms of service to justify manipulative practices.

Given that most mental health apps have not been clinically validated, this poses significant risks. It would be helpful to have a clear and clinically informed definition of what constitutes “approved therapeutic purposes” and what types of manipulation are permitted within this exception.

The EU’s Medical Device Regulation 2017 (EU MDR) regulates therapeutic digital services for health purposes. We believe it is essential that systems able to manipulate or persuade their end users are quality-assured and held to high standards of effectiveness for users’ health benefit. But without a clear definition of ‘therapeutic purposes’, European citizens are at risk of AI providers using this exception to undermine citizens’ personal sovereignty.

Potential harms from improperly used therapeutic exemptions

The term “therapeutic purposes” could serve as a convenient veil for ulterior motives. A therapeutic narrative can be employed to exert influence that is ideological or political. Any AI language model performing a therapeutic role will also be able to manipulate and persuade users in ways that advance commercial objectives, whether by driving engagement or by shifting preferences towards certain products, organisations or services.

For instance, the plethora of ‘brain-training’ apps and programs often primarily serves the commercial imperative to maximise engagement while promising to fine-tune the mind. These platforms keep users returning, an engagement loop that can skirt close to addiction.

The commercial allure of ‘brain-training’ apps and programs is also reflected in the size of the industry they have created. The sector saw a compound annual growth rate of 20 to 25%, escalating from a market value of $1.3 billion in 2013 to an anticipated excess of $6 billion by 2020. This lucrative trajectory underscores a commercial impetus to maximise user engagement, potentially at the expense of genuine therapeutic value. The commercial entities behind these apps promise cognitive enhancement, although the scientific consensus regarding their effectiveness remains equivocal at best.
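(As a rough check of those figures, assuming seven full years of compounding between 2013 and 2020: $1.3 billion growing at 25% a year gives approximately $1.3bn × 1.25^7 ≈ $6.2 billion, so the $6 billion projection sits at the upper end of the quoted growth range.)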

Given the therapeutic intentions of a number of current and future entrants into the AI market, it is key that European citizens are protected from the impact of these platforms. After all, human therapists are held to extremely high ethical and clinical standards, given their access to individuals at their most vulnerable. We should expect no less for artificial intelligence systems.

A clear definition and scope for “therapeutic purposes”

The EU AI Act could provide a definition of what constitutes ‘therapeutic purposes’ in the context of AI and, therefore, which systems benefit from the exception for persuasion and manipulation. This is especially important because the term “therapeutic purposes” is not defined in the EU MDR, which leaves it open to interpretation.

One option could be to require companies to substantiate their claims of therapeutic purposes by providing intended purpose statements that clearly outline the modes of persuasion and manipulation and why they are central to the therapeutic purpose. This could help protect against cases where the persuasion primarily increases engagement with the AI system. 

We think the EU MDR’s existing definition of intended purpose and its means of assessing such statements are an excellent starting point. Intended purpose is assessed “according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements and as specified by the manufacturer in the clinical evaluation”. Companies could be required to support such a statement with evidence. An online portal could be developed where companies submit the necessary documentation and where approved AI systems are listed for public review.

In addition, we recommend that:

  • The definition encompasses both software-only AI interventions and those combined with medicinal products.
  • The guidance explains how this exception interacts with existing equality and human rights law; this could be achieved through a set of illustrative case studies demonstrating how these laws apply in various scenarios.
  • Technologies that serve therapeutic purposes but do not qualify as medical devices (‘borderline products’) receive careful consideration; this could happen through dialogue with EU MDR legislators to understand which products might fall into this category and to ensure proper oversight.

Sufficient external pre-market oversight under EU MDR

The existing decentralised system for licensing medical devices in the EU lets companies assess for themselves whether their product qualifies as a medical device and, if so, its risk classification. While this approach is probably necessary for a sector with more than 500,000 products, it carries risks. Companies claiming their technology is Class I (the lowest tier) can self-certify, whereas technologies considered higher risk (Class IIa or above) must meet progressively stricter safety standards and have their CE mark approved by a notified body that reviews their technical documentation, such as clinical trial results. The lack of external approval is a significant risk to people using self-certified technologies.


This is why we are concerned by the potential for technologies to invoke the therapeutic purposes exemption while claiming Class I medical device status. It would mean that such technologies could be placed on the market without independent oversight or validation of their benefits.

While we think it is highly unlikely that any such technology would fall under Class I, given Rule 11 of EU MDR Annex VIII, it is vital to ensure this loophole cannot be exploited.

Championing the highest standards is a European value

Closing the therapeutic loophole would ensure the safety of millions of European citizens and encourage evidence-based innovation. Incorporating a definition into the EU AI Act would ensure adaptivity and resilience, leading the EU to champion an era where AI not only augments healthcare but does so with the highest standards of ethics, efficacy, and safety. All values that are very European.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.