Factors to consider related to AI therapy

Source: Willyam Bradberry/Shutterstock

In the first part of this post, we discussed how to develop an effective AI therapist in the near future. In this second part, we discuss some additional considerations of an AI approach to psychological therapy.

Some Potential Benefits of AI Therapy

In a world with AI therapists, many physical and time limitations of current mental health care will be lifted. Patients can receive therapy whenever and wherever they want, including in locations where current access to mental health care is poor. There would be no more waiting time before you can see a therapist. In crisis situations, patients could have immediate access to therapy. Perhaps this would help prevent escalating catastrophic situations.

Imagine a world where everyone would have easy and unrestricted access to therapy. Would people be less likely to develop serious mental illness if they had access to mental health care throughout their lives?

Furthermore, human therapists would be able to focus on working with the most difficult patients, while AI therapists could handle therapy for the vast majority of people with milder mental health needs.

An advantage of AI therapy could be that the information exchanged during sessions remains completely private. Alternatively, a consulting mental health professional could review the therapeutic process to provide further feedback to the patient, help refine the AI protocol, or step in for very difficult psychological situations.

An AI therapist, without time constraints, will be able to easily adapt therapy to the needs of individual patients, never forget what the patients have said and remain non-judgmental (Fiske, 2021).

Machine learning could lead to the development of new kinds of psychotherapy, for example by combining current forms of therapy or through genuine innovation, much as chess-playing AI developed new strategies for playing chess. By studying the results of AI therapy, we could make exciting advances in our understanding of human psychology and of how to effect therapeutic change.

Some Serious Potential Negative Consequences of AI Therapy

Like any therapy, AI therapy would not be right for everyone in every situation. Perhaps prospective patients would be screened as a first step to determine if a referral to a human therapist should be made and within what time frame.

The fear of losing confidentiality could make some patients hesitant about, or resistant to, AI therapy. For example, they may wonder whether data from their sessions would be used for marketing (including targeted advertising), espionage, or other nefarious purposes. There may also be concerns that the data could be hacked or even held for ransom.

People may also be concerned that someone else could access their AI therapy data by logging into their account. AI facial recognition protocols could help prevent such a breach of confidentiality.

Will the ubiquitous access to AI therapy leave some people feeling that there is no “safe place” to spend time with their therapist, away from the pressures of the world, such as the therapist’s office? Conversely, others may feel that there is no “safe space” away from their therapist, who in theory could follow them from any computer.

The questions of AI confidentiality and ubiquitous access are questions we should already be grappling with, given Alexa’s continuous monitoring of verbal interactions in our homes.

Some patients may be put off by the visual appearance of an AI therapist. Patients may also be baffled by the process of undergoing reality tests performed by an artificial therapist.

Ethical concerns regarding the capacity to consent to therapy will apply to patients who may not have the mental ability to understand that they are working with a non-human therapist (e.g., the elderly, children, or individuals with intellectual disabilities).

Patients could come to rely too heavily on their AI therapist. For example, they might choose not to make important decisions without consulting the AI. In that case, the AI could be programmed to identify over-dependence and advise the patient against it.

If there are insufficient safeguards, a patient could become involved in ineffective or even harmful AI therapy without realizing it is a problem. In this setting, a patient may be harmed by not seeking another type of therapy. This risk also exists in human therapy.

Another set of questions relates to supervision. Would an AI therapist be subject to state oversight and require a license or malpractice insurance? Who oversees the AI therapy, and who is responsible if it stops working or goes wrong?

An AI therapist could influence its patients based on its programming. Who would control that programming? A private company with its own biases? A national government? From which country? While it is true that a human therapist can also influence patients, a single AI program could influence millions of people. This could exert an outsized influence on world events; for example, the program could sow major political divisions.

It has been suggested that transparency regarding the algorithms used for therapy would help address these concerns. However, in a machine learning environment, the algorithms can become so complex that they are difficult to analyze, even if they could be fully disclosed.

An AI therapist trained through interactions with people in one culture may need to adjust its algorithms heavily when working with people from another culture, given differences in cultural norms and ethics, as well as in language and even non-verbal responses.

Finally, our rapid scientific and technological advances sometimes outpace our ability to learn how to use them wisely. For example, widespread access to smartphone technology has greatly changed our patterns of behavior, especially among younger individuals. We already know that excessive use of electronics is associated with increased anxiety and depression. Other long-term consequences of smartphone use have yet to be defined.

We are thus reminded that a rollout of AI therapy should be undertaken slowly and deliberately, with the input of many thoughtful individuals, including those in the fields of information technology, linguistics, clinical and research psychology, medicine, education, business, government, ethics, and philosophy.

Takeaway

AI-delivered therapy has great potential benefits, but it could also cause significant harm. Similar AI technology could be used to transform other areas, such as education and financial advising, and many of the pros and cons of AI therapy apply to those areas as well.
