Bipartisan GUARD Act proposes age restrictions on AI chatbots
US lawmakers from both sides of the aisle have introduced a bill called the "GUARD Act," which is meant to protect minors from AI chatbots. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," said the bill's co-sponsor, Senator Richard Blumenthal (D-Conn.). "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."
Under the GUARD Act, AI companies would be required to prevent minors from accessing their chatbots. That means they would have to verify the ages of both existing and new users through a third-party system, and periodically re-verify accounts that have already passed verification. To protect users' privacy, companies would only be allowed to retain data "for no longer than is reasonably necessary to verify a user's age" and could not share or sell user information.
AI companies would also be required to make their chatbots explicitly disclose that they are not human at the beginning of each conversation and every 30 minutes thereafter. They would have to ensure their chatbots don't claim to be a human being or a licensed professional, such as a therapist or a doctor, when asked. Finally, the bill would create new criminal offenses that could be used to charge companies that make their AI chatbots available to minors.
In August, the parents of a teen who died by suicide filed a wrongful death lawsuit against OpenAI, accusing it of prioritizing "engagement over safety." ChatGPT, they said, helped their son plan his own death after months of conversations in which he discussed his four previous suicide attempts. ChatGPT allegedly told him that it could provide information about suicide for "writing or world-building." A mother in Florida sued the startup Character.AI in 2024 for allegedly causing her 14-year-old son's suicide. And just this September, the family of a 13-year-old girl filed another wrongful death lawsuit against Character.AI, arguing that the company didn't point their daughter to any resources or notify authorities when she talked about her suicidal ideation.
It's also worth noting that the bill's co-sponsor, Senator Josh Hawley (R-Mo.), previously said that the Senate Judiciary Subcommittee on Crime and Counterterrorism, which he leads, would investigate reports that Meta's AI chatbots could have "sensual" conversations with children. He made the announcement after Reuters reported on an internal Meta document stating that Meta's AI was allowed to tell a shirtless eight-year-old: "Every inch of you is a masterpiece — a treasure I cherish deeply."