Just as the impacts of social media and smartphones on children’s mental health have been made clear through a growing body of data-driven research and through books such as Girls on the Brink by Donna Jackson Nakazawa and The Anxious Generation by Jonathan Haidt, along comes the AI tsunami. As child-focused clinicians and as parents/caregivers, whether we like it or not, we need to stay on top of this bullet train with an eye to its impact on children’s healthy social, emotional and physical development. To be clear, debate over the impact of technology trends is not new – earlier versions include TV “babysitters,” embedded screens in cars and, of course, video games. However, the current pace of change + lack of guardrails/accountability + ease of access, combined with the pre-existing groundwork laid by our app culture (i.e., “there’s an app for that”), creates a perfect storm for even more self-isolation and decreased wellbeing. Indeed, “AI tools have gained popularity in recent years and are increasingly incorporated into social media and other tech platforms.”
Last month we highlighted the growing trend of AI therapy bots, with the premise that they are not substitutes for clinicians, and certainly not for use with children. We also reiterated that we are not against using AI in mental health. Rather, we are for (and are investigating) using AI to support getting more human therapists out into the world. This month we are spotlighting the new category of AI chatbots specifically developed to act as non-human companions.
Already, a 14-year-old boy’s suicide has been attributed to the app Character.AI. Think about this: whether a relationship is with another human or with a machine, the emotions are real. Click here for a recent article that lays out the dangers of AI chatbots for everyone, and especially for children. It also details the outcomes of testing three AI companion bots, conducted by the nonprofit media watchdog Common Sense Media in collaboration with Stanford Brainstorm, Stanford University’s lab focused on technology and mental health. As Stanford Brainstorm’s founder and director Nina Vasan said, “We failed kids when it comes to social media. It took way too long for us, as a field, to really address these (risks) at the level that they needed to be. And we cannot let that repeat itself with AI.”