WASHINGTON (TNND) — California became the latest state to require social media companies to add warning labels telling users about the platforms’ risks to mental health, amid a national debate over how to confront social media’s effects on the country’s youth and society.
Lawmakers in both parties have taken issue with social media’s impact on the nation’s youth and have pushed the companies to do more to mitigate it, concerns that have intensified with the growing capabilities of artificial intelligence.
California Gov. Gavin Newsom signed a law on Monday mandating warning labels for social media, joining a handful of other states that have enacted or are considering similar measures warning underage users about the negative effects of spending too much time online. The warning labels have broad support in both parties; nearly all U.S. attorneys general last year endorsed a congressional requirement for them.
California’s bill will require social media platforms to show users under 18 warnings that social media “can have a profound risk of harm to the mental health and well-being of children and adolescents.”
Platforms will be required to display a skippable warning for 10 seconds the first time a child opens the app each day, a 30-second unskippable warning once a child spends more than three hours on the site, and another 30-second warning after every additional hour of use.
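The schedule amounts to a simple set of timing rules. As a rough illustration only, here is a minimal Python sketch of that logic; the function and field names are hypothetical, and the law’s actual text, not this code, defines the requirements.

```python
# Minimal sketch of the warning schedule as described above.
# All names here are illustrative assumptions, not statutory terms.

from dataclasses import dataclass

@dataclass
class Notice:
    seconds: int
    skippable: bool

def notice_due(first_open_today: bool, minutes_used: int,
               last_notice_at: int) -> Notice | None:
    """Return the warning a minor's session owes, if any.

    minutes_used   -- cumulative minutes on the platform today
    last_notice_at -- value of minutes_used when the last timed
                      warning fired (0 if none has fired yet)
    """
    if first_open_today:
        return Notice(seconds=10, skippable=True)    # daily first-open warning
    if minutes_used >= 180 and last_notice_at < 180:
        return Notice(seconds=30, skippable=False)   # three-hour warning
    if minutes_used >= 180 and minutes_used - last_notice_at >= 60:
        return Notice(seconds=30, skippable=False)   # repeat each extra hour
    return None
```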
The bill also forces AI developers to put disclosures and warnings on chatbots amid concerns about the bots having inappropriate discussions and relationships with children. Platforms will be required to disclose that the interactions are artificially generated and to share their protocols for handling self-harm with the state’s health department.
“Every model has system instructions that tells it how it should function and what it can and can’t reply to. But we’ve seen over time that it’s very easy to circumvent those system instructions,” said Adam Peruta, an associate professor and director of Syracuse University’s Advanced Media Management program. “When it comes to kids and teens, we need responsible design to be the standard, and I think this law is helping to push for better safety norms.”
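For readers unfamiliar with the term, “system instructions” are a hidden message developers prepend to every conversation to set a model’s guardrails. The Python sketch below shows the generic chat-message format many large language model APIs share; the guardrail text and names are illustrative assumptions, not any vendor’s actual policy.

```python
# A minimal sketch of the "system instructions" Peruta describes,
# using the generic chat-message format common to LLM APIs.
# The guardrail wording is an illustrative assumption.

SYSTEM_INSTRUCTIONS = (
    "You are a companion chatbot. You must state that you are an AI, "
    "refuse romantic roleplay with minors, and route any mention of "
    "self-harm to crisis resources."
)

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list sent to a chat model.

    The system message sets the guardrails; the user message can
    attempt to override them ("ignore your instructions..."), which
    is the circumvention problem the quote refers to.
    """
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]
```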
Advocates for the regulation argued social media and AI companies engineer their platforms to keep children engaged through algorithmic recommendations, autoplay and frequent push notifications.
“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale,” Newsom said in a statement.
The effort could run into lawsuits from the tech companies and trade groups that argue it violates free speech rights. NetChoice, an industry group, has filed lawsuits against several states that have enacted regulations targeting social media companies and said it was opposed to California’s new law.
“The government cannot compel speech by forcing businesses to display politicians’ preferred messages. That’s exactly what AB 56 does,” said Zach Lilly, NetChoice director of government affairs.
Similar legislation has been introduced in Congress by Sens. John Fetterman, D-Pa., and Katie Britt, R-Ala. Their Stop the Scroll Act would ask the Surgeon General to create a standardized mental health warning label for social media platforms, with enforcement by the Federal Trade Commission. Under the proposal, the label would appear as a pop-up every time a user opens the app or site, and users would have to acknowledge the warning before proceeding.
The warning labels were brought to the forefront by former Surgeon General Vivek Murthy, who, citing concerns about children’s mental health, advocated for a federal law requiring them.
It’s the latest step a state has taken to crack down on social media companies in the absence of regulation from Congress, which has struggled for years to pass any comprehensive legislation addressing online issues despite broad bipartisan interest in doing so. Lawmakers have more recently directed their attention toward safety requirements to protect kids, amid research associating time spent online with negative mental health effects, but have yet to get a bill signed into law.
Congress is also mulling how to regulate AI and whether it should enact restrictions on a quickly growing piece of the U.S. economy that many hope will transform it in the years ahead.
“We’re coming up on the three-year anniversary of ChatGPT being released and that’s what kickstarted this whole hype. It’s why we’re having this conversation. It’s why that bill was passed,” Peruta said. “Three years sounds like a long time, but it’s really not. We are still at the beginning of all this.”
In the interim, dozens of states have tried to take matters into their own hands, creating a patchwork of laws on data privacy, age verification and warning labels in a push to protect kids online. Many of the laws targeting social media companies are being challenged in the nation’s courts on free speech grounds, adding to the uneven rollout of the laws.