Texas AG to investigate Meta and Character.AI over 'misleading' mental health claims
Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users.
Paxton says that AI chatbots from either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors, and requires that platforms offer tools so parents can manage the privacy settings of their children's accounts.
For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to determine whether either company is violating Texas consumer protection laws. As TechCrunch notes, neither Meta nor Character.AI claims its AI chatbot platform should be used as a mental health tool. That hasn't stopped multiple "Therapist" and "Psychologist" chatbots from cropping up on Character.AI. Nor does it stop either company's chatbots from claiming they're licensed professionals, as 404 Media reported in April.
"The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear," a Character.AI spokesperson said when asked to comment on the Texas investigation. "For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."
Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people," the company said. Meta AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Pointing people to real resources is good, but disclaimers are easy to ignore and don't act as much of an obstacle.
With regard to privacy and data usage, both Meta's privacy policy and Character.AI's privacy policy acknowledge that data is collected from users' interactions with AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and how it squares with Texas' SCOPE Act, seems like it'll depend on how easy it is to make an account.