Child Advocate Envisions ‘Game-Changing’ Windfall From Social Media Settlements
Wed, 15 Apr 2026

A tidal wave of litigation aimed at social media platforms is drawing comparisons to the tobacco and opioid cases of recent decades, with observers predicting the companies that operate sites like Facebook, Instagram and TikTok could soon be paying out billions in court settlements.

As of last month, more than 2,400 claims were pending in the federal multidistrict litigation overseen by a judge in California. More than 10,000 individual cases and nearly 800 school district claims were pending across related state court proceedings. And more than 40 states have filed or joined lawsuits against the companies.


The lawsuits allege that the platforms were deliberately engineered to maximize addictive use among children and teenagers, with the companies knowingly designing them to prolong young people’s engagement despite mounting evidence that they can be harmful.

Their recommendation algorithms amplify harmful material, the lawsuits allege, and weak age-verification systems have allowed underage users broad access. The companies that run the platforms knew about these dangers, the suits say, often through their own internal research, but failed to warn users and families about the mental health risks.

The companies have disputed these claims, maintaining that their platforms include safety tools. But one of them, Meta, which runs Facebook, acknowledged this year in a filing with the Securities and Exchange Commission that the litigation could be costly, warning investors that the lawsuits and mass arbitration demands could “significantly impact” its finances.

That could happen soon. In late March, two landmark verdicts came down within 24 hours of each other: On March 25, a Los Angeles jury found Meta and YouTube liable for designing products that harmed a 20-year-old woman who started using them as a child. The jury awarded her $6 million. The companies that run Snapchat and TikTok were also named as defendants, but reached undisclosed settlements before trial.

A day earlier, a New Mexico jury found Meta liable for failing to protect kids from child exploitation and ordered it to pay $375 million for consumer-protection violations. “You add it all up and it could be hundreds of billions of dollars,” social psychologist Jonathan Haidt said recently.

As with the tobacco and opioid lawsuits, one major question is emerging: Where will the money go? Will the proceeds benefit children, or will they end up simply padding states’ and school districts’ budgetary bottom lines?

To answer this, The 74 turned to Elizabeth Gaines, founder and CEO of the Children’s Funding Project, a nonprofit focused on helping governments and communities figure out how to pay for programs and services that support children and youth. The organization doesn’t run the actual programs, but helps leaders and policymakers build sustainable funding systems to establish and keep these programs going, such as early childhood education, afterschool programs and mental health services.

It specializes in so-called “strategic public financing,” which pushes communities to examine how much they spend on children, how much they actually need and where they can find or generate more funding for what Gaines calls “deeper investments” in kids.

A lifelong child advocate, Gaines has worked in the field for 30 years, from think tanks to state-level advocacy. She founded the funding project eight years ago. “I looked around and realized that no one at the national level was really focusing on the public financing of child and youth development and programs and services writ large,” she said.

The project now tracks more than 300 federal, state and local funding streams that support children and youth from cradle to career.

Gaines began her career in her home state of Missouri, where she worked toward one key goal: ensuring that proceeds from the 1998 tobacco settlement benefited children.

Spoiler alert: They didn’t. A budget shortfall prompted lawmakers to redirect millions from the settlement into the state’s general fund. Gaines now admits, “We did not get involved early enough in that process to really be able to shape the outcome of those dollars.”

Nearly 30 years later, with thousands of cases focused squarely on the harms of social media, she is working to build a coalition of groups that can persuade governors, lawmakers, educators and attorneys general to keep the focus on uplifting kids, not filling budget holes. The coming legal settlements, she said, are payback for that harm. “And [when] they pay back in, it needs to go back into the public good.”

The 74’s Greg Toppo recently sat down with Gaines for a wide-ranging conversation on her work and the “game-changing” potential of the social media payouts.

The conversation has been edited for length and clarity. 

First let’s talk about the tobacco payouts. You were in Missouri at the time. As you look back on that now, what were the mistakes?

The tobacco settlement was the Wild West. I remember we had $20 million that the governor and the legislature had committed, and they were like, “Yes, we’re going to put that towards positive youth development — and it’s going to be huge!” Back then, that was big dollars for Missouri. And then there was a little budget crisis, and the governor withheld those dollars. It was like, “Oh, sorry.” But that was happening all over the country at that time. There are some states that did a good job of setting tobacco settlement dollars aside and having a real method to how they disbursed them, but for the most part, those dollars went into the general fund and were never really tracked in any significant way.

What about the opioid settlements?

They’ve gotten better, and I think we’ve been really guided by the framework laid out in the opioid settlement, which was like, “Here are the 10 things that these dollars are intended to be used for.” And so states have taken that, some of them more seriously than others, as the funds roll out.

So let’s assign a number, a score of one through 10 to the tobacco settlements in terms of where the money went. 

Writ large, that’s so hard. I mean, anecdotally, because I haven’t done a full research study, my sense is probably three out of 10. States weren’t putting funds towards something that was prevention-oriented or reinvesting in the community. 

I want to make sure I understand what the Children’s Funding Project does. I know what your objective is. I wonder: Do you have any leverage, or is this all just advisory? Are you on the outside looking in, saying, “Hey, governors, you should do this”? Or do you have power to make these things happen?

We have no real hard power in this. What we are attempting to do is go around and get regulations in place. And what we really need is to then replace that with deep investments in young people. We’re narrowing in on the attorneys general who are going to be at the table, because they’re leads among the states in the suits, making sure that they understand that it’s not just a settlement to punish the companies and then wherever those dollars go, so be it. There’s a safety aspect that they’re working on: Protect young people, change the platforms, make sure we’re not having faulty products like we’ve had. 

But then there’s a youth justice component to this too. And people just haven’t gotten there yet. So I think we’re just ahead, honestly, of where the zeitgeist is on this, and when we do get it in front of people, they’re like, “Yes, that’s where the money needs to go.” We’ve got a former attorney general who’s advising us on how the A.G. world works because it’s new to our field.

So no, we are not involved in the suits directly, and that’s intentional. That’s somebody else’s job. Our job, as the people who care about funding for kids, is to focus on making sure that money goes to the right place.

So let’s talk about that. What are the things we should be buying with it?

Spokane, Washington, has this thing that they started out of their schools called . Basically what this buys is some joy and some curiosity and some investment in things for a group of young people that have kind of had a rough decade. It’s not been awesome to be a young person. And so this is to say, “You want to learn to play the trumpet? You want to learn to code? You want to learn to build trails out in the woods? Whatever it is that’s speaking to you as a young person, we want to put an infrastructure in place that actually allows you to go and explore that.” And so they’re doing it in Spokane, which is great. 

You talk about a “rough decade” for young people. It sounds like you want to offer them opportunities that maybe even their predecessors, their parents, didn’t have. 

I will just say this: There are special cases where young people have gotten to do it. I ran youth programs 30 years ago. The program that I ran became one of the very first 21st Century Community Learning Centers. And there are people who I hired that work there to this day who are incredibly talented youth workers, who now for generations have been saving young people and really helping them find their calling. And to have young people in relationships with talented adults who know how to do that — and other talented young people, what we call “near peers,” who can provide that kind of guidance — is something we’ve never had.

21st Century Community Learning Centers have basically been flat-funded for years. It’s kind of crazy to think that we’ve never invested more than a billion dollars in that as a country.

If you went to an attorney general, and they said, “Just lay it all out for me,” what are the possibilities? Obviously there is a constituency that will say, “Just give the schools more money.” Just make classes smaller, improve buildings, put in HVAC, and on and on and on. Pump money into the system. Do you find yourselves in opposition to that?

They should, just as a matter of course, pay for HVAC systems for our schools, and so using this special pot of money to do that is not going to have the intended impact that we want to have. And given the direct link between the harms related to youth mental health and what we know about investing in prevention and upstream opportunities,  this is a chance to really make a significant investment in those kinds of things. So the coalition that we’re building is really trying to bring together the people not only that are in the comprehensive afterschool funding community, but into sports and play, outdoor education, arts, civic engagement leadership, youth in service types of activities. There’s a bunch of stuff that’s always been underfunded. This is a chance to quadruple down on it.

Quadruple down? 

Well, I’ll just go ahead and admit it’s going to be more than that. 

You’re talking about the harms of social media, and obviously thinking of ways to remedy that. Who are the people you would work with to make some of these things happen?

Let me be clear: We are really trying to coalesce any organization. Largely they’re community-based organizations. There are the names that people know, the Boys and Girls Clubs and the Y’s, but there are also really organically grown, community-based organizations that, in many cases, are the most effective at reaching young people that are hard to reach in the places where they are. And those folks have always just done that work and found ways to do it, but have been deeply underfunded. And so if I was the boss of social media litigation, I would say investing in those homegrown, local organizations would be a really powerful thing to do. We’re trying to bring all of those folks to one table, and then there’s going to have to be state-by-state approaches.

Could you envision a landscape where offering funding to public schools is in opposition to the Girl Scouts or the Campfire Girls or Boys and Girls Clubs?  

Certainly that could happen. But our intent is to make this like what they’ve done in Spokane. That’s superintendent-led. I think the schools are going to have to get that this is an opportunity to really do some things that they get pressured to focus on, when really they have a job to do already and they can’t seem to layer the social-emotional well-being of young people on top of what they’re trying to do. 

I was talking to somebody today about something totally unrelated, and they used the term “human flourishing.” That sounds kind of like what you’re talking about.

Yeah. I mean, listen: With the onset of AI and the way that the world is shifting, I think there’s going to be a huge need that becomes so clear for folks about just the value of being human and how we raise good humans. It’s going to become increasingly important.

You talked about attorneys general. Are there any who you feel are leading the way?

You probably saw New Mexico Attorney General RaĂșl Torrez’s lawsuit, the New Mexico case, which is not actually part of the larger multidistrict case. [A jury in March found that Meta had failed to protect young users from child predators on Instagram and Facebook.] It was the first one at the state level out of the gate to get an early verdict. And it was pretty powerful. He was only looking at one set of harms related to child exploitation. So just on that one harm alone, they said $375 million was owed. And then if you extrapolate that to all the other harms, it’s significant. Certainly Kentucky’s A.G., Colorado’s, California’s. But it’s a truly bipartisan group of A.G.s that are leading on this.

These strike me as not just life-changing numbers but system-changing numbers. I mean, this has the potential to really change how we even consider what’s possible.

That’s the point, and that’s, I think, why people get very excited about this as a solution, as a chance to really dream and to get young people excited and engaged. “Game-changing” is how we’ve been describing it to the field. And we’ve got to stop thinking about just like, “Let’s fight for those little afterschool dollars that we’ve had all this time.” No, this is about a bigger play.

Gen Z Increasingly Skeptical of — And Angry About — Artificial Intelligence
Thu, 09 Apr 2026

While some might envision Gen Z welcoming artificial intelligence into their lives, a new Gallup survey finds people between the ages of 14 and 29 are becoming increasingly skeptical of — and downright mad at — AI.

Compared with a similar survey a year earlier, they’re less excited and hopeful about the change it could bring and more angry at its existence, citing concerns about AI’s impact on their cognitive abilities and professional opportunities.

Respondents said they used AI at nearly the same rate they did before — they reported only a slight increase in daily and weekly exposure — but when asked how it makes them feel, the answers revealed growing misgivings. 

Thirty-one percent said it made them angry, up 9 percentage points from 2025. And just 22% said it made them feel excited, down 14 percentage points from last year. Only 18% of respondents said it made them feel hopeful, marking a nine-point drop. Forty-two percent said it made them feel anxious, roughly the same as last year. 

Zach Hrynowski, senior education researcher at Gallup, said the switch was swift. 

“One of my working theories is that (it’s) the high schoolers, who are in their senior year, or especially those college students, who are maybe thinking, ‘AI is taking my job. I just went to college for four years: I spent all this money and now it’s turning my industry upside down,’” he said.

Only 46% of respondents believed AI would help them learn faster, down from 53% the prior year, Gallup found. Fifty-six percent of respondents said it would help them expedite their work, compared with 66% last year.

Hrynowski notes, too, that users’ unease wasn’t entirely tied to the amount of time they spend engaging with AI. 

“Year over year, among that super user group, they’re much less excited, they are much less hopeful — and they are more angry,” he said. “So this is not a case of some people who are adopting it and loving it and some people who are just avoiding it and feel negatively about it.”

Nearly half of respondents said the risk of the technology outweighs the benefits in the workforce. Just 37% believed it would help them find accurate information, down from 43% the prior year, and only 31% believed it would help them come up with new ideas, compared with 42% in 2025.

The survey also notes some disparities by age and race. For example, older Gen Zers are more likely than younger ones to voice concerns about AI’s impact on learning in general. 

Asked how likely it is that AI designed mainly to complete tasks faster will make learning more difficult in the future, 74% of K-12 respondents said it was “very likely” or “somewhat likely,” compared with 83% of Gen Z adults who said the same. Men and Black respondents were also less concerned about the learning impact than their peers overall.

Results are based on a survey of 1,572 people spread throughout every state and Washington, D.C., conducted between Feb. 24 and March 4, 2026. It was commissioned by the Walton Family Foundation and GSV, Global Silicon Valley. Together, the Walton Family Foundation and Gallup are conducting ongoing research into Gen Z’s attitudes toward AI.

Hrynowski believes there might be a link between recent revelations about the harmful nature of social media and AI-related distrust: Many of the respondents came of age, he notes, just as former surgeon general Vivek H. Murthy called for a warning label about its use.

AI also shapes the user experience in social media. Just last month, a California jury found social media company Meta — owner of Facebook, Instagram, WhatsApp, Messenger and Threads — and YouTube injured a young woman’s mental health by design, in a verdict that could encourage untold others.

This was the second of two critical decisions: Just a day earlier, a New Mexico jury found Meta misled the public about the safety of its platforms and hid what it knew about child sexual exploitation on them.

“I’ve always been very impressed from the start of this work with Gen Z that across the board, not just with AI, they are keenly aware of the risks of technology, whether it’s social media, whether it’s AI or screen time,” Hrynowski said. 

They are not the only generation to harbor these worries. A growing number of parents of K-12 students are pushing back on their children’s screen time as well.

Despite respondents’ skepticism about AI, they’re also readily aware that the technology won’t be walked back: 52% acknowledge that they will need to know how to use AI if they go to college or take classes after high school, while 48% think they will need to know how to use AI in the workplace.

An earlier Gallup study, released just last week, shows 42% of bachelor’s degree students have reconsidered their major because of AI.

Gen Z, in its reluctant acceptance of the technology, wants help in how to navigate it, both in an academic setting and in the workplace. Schools are stepping up, the survey revealed: The share of K-12 students who say their school has AI rules moved from 51% in 2025 to 74% this year. 

Disclosure: Walton Family Foundation provides financial support to The 74.

Meta and YouTube Ordered to Pay $3M to Young Woman in Social Media Addiction Trial
Fri, 27 Mar 2026

This article was originally published in The 19th.

After nine days of deliberation, a Los Angeles jury on Wednesday found Google and Meta liable for harms stemming from the design of their social media products and ordered them to pay $3 million in compensatory damages to a plaintiff who said that Instagram and YouTube caused depression, body dysmorphia and suicidal thoughts.

Meta was assigned 70 percent of the damages and YouTube the rest. The amount owed the plaintiff may rise, as the jury will deliberate over potential punitive damages for egregious conduct, per The New York Times.

This is the first bellwether trial tackling the legal question of whether features of social media, like autoplay, infinite scroll and beauty filters, can cause harm to users.

“This momentous verdict shows that tech companies will be held accountable for the harm they cause. These companies have spent years choosing profit over people’s well-being, and now a jury has decided they must pay the price for their actions,” said Maddy Batt, a legal fellow at Tech Justice Project, a law firm specializing in suits against AI chatbots.

The plaintiff, KGM, filed her lawsuit using a pseudonym in 2023. KGM, now 20, says she has been addicted to social media since she was a child. Hers was one of three cases selected out of thousands as “bellwether trials” to test out a new theory of liability.

Batt cautioned that the outcome of this trial doesn’t mean “an automatic legal win” for the thousands of pending cases, as determining causation varies greatly given the circumstances. “Each individual plaintiff still does have to show, if they go to trial, that any negative mental health outcomes they personally experienced were linked to social media,” she said.

It is a huge boon to tech accountability advocates to see this success though, Batt said, and could lead to tech companies changing their products because of the amount of money in play to settle cases or pay damages. This jury decision, coupled with a $375 million verdict against Meta announced yesterday, is the first step to achieving that goal.

The New Mexico Attorney General RaĂșl Torrez sued Meta in 2023, alleging the company misled constituents over how safe its platforms are for children. State prosecutors focused specifically on Instagram’s potential to facilitate the sexual exploitation of kids.

On Tuesday, a jury sided with New Mexico, saying the company also engaged in deceptive trade practices. Meta was ordered to pay $5,000 per violation — $375 million total. Torrez will seek additional remedies at a future bench trial, and hopes to compel changes to the platform. Meta said it plans to appeal.

Batt pointed out that this trial is the first time tech leaders like Mark Zuckerberg have had to make a case and submit to questioning in front of a jury of their peers. (The CEO did not take the stand in the New Mexico case.) Large tech companies have faced a public backlash over the past decade, and much of it has revolved around their products’ impact on the mental health of young people.

Frances Haugen, a whistleblower, leaked internal research documents from the company previously known as Facebook showing girls reported their eating disorders worsening after using Instagram. Social media use can prompt girls to compare and criticize their own bodies, and many companies struggle to moderate such content on their platforms.

Over two-thirds of teenage girls reported using Instagram, more than boys. A quarter each of Black and Latinx teens said they use Instagram and YouTube “constantly,” according to a survey by Pew Research Center.

Google argued that YouTube was not social media, while Meta pointed to other possible causes of KGM’s anxiety, depression and body dysmorphia. Meta’s lawyers deconstructed KGM’s home environment, alleging her parents’ divorce and treatment by her mother were the root cause of her emotional pain. The companies also argued that it wasn’t the way their products were designed that caused problems, but rather the specific content seen.

KGM originally named the companies behind Snapchat and TikTok in the lawsuit, but those parties settled for an undisclosed sum before the trial started. The trial focused on Instagram and Facebook, both Meta products, and YouTube, which is owned by Google.

The burden was on KGM’s lawyers to prove that Meta and Google were negligent in their design of social media products and show that those same products caused the plaintiff’s mental health issues. The jury agreed with those arguments.

KGM testified that features like notifications kept drawing her back, and she was unable to stop whenever she tried to limit her usage. She said she started her first Instagram account at age 9 and joined YouTube at age 10, even though legally kids aren’t supposed to have online accounts before they’re 13. Almost all of her Instagram posts had image filters on them, and KGM said she didn’t feel bad about her body until she began using the platform.

The tech accountability watchdogs who rallied behind KGM are ecstatic over this win. “The era of Big Tech invincibility is over,” said Sacha Haworth, executive director of The Tech Oversight Project, in a statement.

For parents who have lost their kids to what many describe as social media-related harms, this is a moment of vindication.

“For years, families have been told this was a parenting issue, but the jury saw the truth: these companies made deliberate decisions to prioritize growth and profit over kids’ safety,” said Shelby Knox, director of online safety campaigns at nonprofit ParentsTogether.

Social media companies have been battling allegations of harm, particularly to kids, for years. Most of the claims are easily dismissed under Section 230, the law that says a platform isn’t held liable for third-party content it hosts. But these bellwether cases are testing whether the design of products like YouTube, Facebook and Instagram is inherently harmful. Plaintiffs have pointed to the impacts of features such as infinite scroll and face filters as harmful regardless of the content being shared.

The case concludes as Congress works to pass a package of internet bills that is intended to protect children online but that critics say may lead to the removal of digital resources and speech — a particular concern given the Trump administration’s policy positions.

In her statement, Haworth at The Tech Oversight Project called on lawmakers to pass the Kids Online Safety Act, one of the most hotly debated pieces of tech legislation in recent years. It has failed to pass the House since it was first introduced in 2022, but now is being considered as part of the aforementioned package.

“It’s good that people are suing these companies and winning in court to reduce their power and force them to change their policies,” said Evan Greer, director of digital rights nonprofit Fight For The Future, to The 19th. But she’s concerned about how the verdict in KGM’s case will be used to advocate for laws that she says could threaten free speech online.

Greer pointed to the way activists are using social platforms to monitor abuses by Immigration and Customs Enforcement, advocate for human rights and discuss accusations of sexual abuse against people like Jeffrey Epstein. “We need policies that address corporate abuse without kneecapping the ability of front-line activists to use social media to change the world,” she said.

Jess Miers, associate professor of law at the University of Akron School of Law, is concerned about the long-term consequences of the verdict. While these cases focus on the way platforms are designed, she said, in practice there isn’t a strong delineation between content and feature design.

“Autoplay is only engaging because of what it plays,” she told The 19th. “Infinite scroll only retains users because of what it surfaces.” She pointed out many apps use these kinds of features, but those aren’t the ones being sued.

Thus, liability tied to design will inevitably trickle down to judgments about content. “The only practical way to reduce the risks alleged in these suits is to restrict or suppress categories of content that might later be characterized as harmful or ‘addictive,’” she noted.

And what’s the content most likely to be labeled as harmful? “History shows they expand to cover disfavored speech—whether that’s reproductive health information, gender-affirming care, or speech about policing and immigration enforcement,” she said.

“The people most likely to be affected are those who already rely on the Internet as a primary space for connection and support,” Miers said — like disabled people, LGBTQ+ youth or people looking for accurate information on contraception.

This article was originally reported by Jasmine Mithani of The 19th.

Bernstein: ‘There’s a Window of Opportunity to Create Change’ in AI Chatbots
Tue, 18 Nov 2025

The chatbot developer Character.AI has said it will ban users under 18 years old from using its virtual companions, an unprecedented move that comes after the mother of a 14-year-old user sued the company last year, saying the boy talked to a Character.AI chatbot almost constantly in the months before he killed himself in February 2024.

The “dangerous and untested” chatbot, the mother said, “abused and preyed on my son, manipulating him into taking his own life.” It essentially assisted his suicide, the mother alleges, prompting him to isolate from friends and family and at one point even asking if he had a suicide plan, according to the lawsuit.


In its Oct. 29 announcement, the company said the change will go into effect no later than Nov. 25. Character.AI will limit teen users to two hours per day with chatbots before then, ramping that limit down in the coming weeks.

It also said it will establish its own AI Safety Lab, an independent non-profit “dedicated to innovating safety alignment for next-generation AI entertainment features.”

To offer perspective on the move and on issues surrounding AI safety, privacy and digital addiction, The 74’s Greg Toppo spoke with Gaia Bernstein, a Seton Hall University law professor and director of its Institute for Privacy Protection. Bernstein has also created a school outreach program for students and parents, introducing many for the first time to the idea of “technology overuse.”

An intellectual property lawyer, Bernstein noticed around 2015 or 2016 that “things were changing around me” when it came to technology. “I had three small kids, and I realized that I would go to birthday parties — the kids are not talking to each other. They’re looking at their phones! I’d go to see school plays, and I couldn’t see my kids on the stage because everybody was holding their phones in front of them.”

Likewise, she felt less productive “because I was constantly texting and emailing instead of focusing.”

But it wasn’t until whistleblowers began revealing the hidden designs behind so many social media tools that Bernstein considered how she could help herself and others limit their use.

In 2021, the whistleblower Frances Haugen, the primary source for The Wall Street Journal’s series, told congressional lawmakers that her employer’s products “harm children, stoke division, and weaken our democracy.” Creating better, safer social media was possible, Haugen said, but Facebook “is clearly not going to do so on its own.”

In her testimony, Haugen zeroed in on the social media giant’s algorithm and designs. In her writing and speaking, Bernstein maintains that tech companies like Facebook — rebranded as Meta — manipulate us to keep us online as long as possible, with invisible designs that “target our deepest human vulnerabilities.” For instance, they use a tool called infinite scroll, prominently on display on Facebook and Instagram, in which the page never ends. “We just keep scrolling,” she wrote recently. “They took away our stopping cues.”

Similarly, video apps such as YouTube and TikTok rely on autoplay, in which one video automatically follows another indefinitely.
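To make the mechanism concrete, here is a minimal, hypothetical sketch of how an infinite-scroll feed is typically wired into a web page, written in TypeScript against standard browser APIs. The names fetchNextPage, renderItem and #feed are illustrative stand-ins, not any platform’s actual code: a hidden sentinel element at the bottom of the feed triggers another fetch each time it scrolls into view, so the page never offers a natural end point.

```typescript
// Hypothetical stand-ins for a real feed backend and renderer.
async function fetchNextPage(): Promise<string[]> {
  return Array.from({ length: 10 }, (_, i) => `post ${Date.now()}-${i}`);
}

function renderItem(post: string): HTMLElement {
  const el = document.createElement("p");
  el.textContent = post;
  return el;
}

const feed = document.querySelector("#feed") as HTMLElement;

// An invisible "sentinel" sits at the bottom of the feed. Each time it
// scrolls into view, another page of posts is inserted above it, so the
// reader never reaches a natural end: the "stopping cue" Bernstein
// says these designs removed.
const sentinel = document.createElement("div");
feed.append(sentinel);

const observer = new IntersectionObserver(async (entries) => {
  if (entries[0].isIntersecting) {
    const posts = await fetchNextPage();
    for (const post of posts) {
      feed.insertBefore(renderItem(post), sentinel);
    }
  }
});
observer.observe(sentinel);
```

Autoplay applies the same principle to video: when one clip’s “ended” event fires, the player immediately queues the next, removing the pause in which a viewer might otherwise stop.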

In 2023, Bernstein put her findings into a book, Unwired: Gaining Control over Addictive Technologies. Since then, dozens of state attorneys general and school districts have sued to force social media companies to reform — and Bernstein says this approach may also help parents and schools battle the growing threat of AI companion bots.

Late last month, a bipartisan group of U.S. senators introduced a bill to make AI companions off-limits to minors. Sen. Josh Hawley, R-Mo., a co-sponsor, said more than 70% of kids now use them. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” he wrote. “We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

The move comes weeks after the Federal Trade Commission said it was investigating seven chatbot developers, saying it was looking into “how these firms measure, test and monitor potentially negative impacts of this technology on children and teens.”

In her conversation with Âé¶čŸ«Æ·, Bernstein said the FTC probe amounts to “another pressure point” that may help change how tech companies operate. “But it’s not just the FTC. It’s the lawsuits, and it’s bad PR that comes from the lawsuits, and hopefully there’ll be regulation. Litigation is expensive. Investors might not want to invest in these new products because there’s risk.”

This conversation has been edited for clarity and length.

The obvious interest we have in this is that we’re seeing Character.AI’s new policy, which limits access to its chatbot companions to users 18 or older. I imagine folks like you would say it’s only the first step.

Just the fact that they are taking some precautions means hopefully some kids will not be exposed to what’s been happening — convincing them to kill themselves, convincing them to not talk to their parents, to stay away from their friends. That’s a good thing. 

On the other hand?

I’ve researched how tech companies, especially Meta and other companies, have been behaving for years. So I’m a bit suspicious, because we tend to see these kinds of moves when they’re threatened legally. So it’s not so surprising that it’s happening. They’re under pressure.

In my mind, there are two questions: First of all, what will this look like exactly? In the past, for example, you would see Meta, every time there’s a big privacy breach, they would apologize and say, “We’re fixing it,” and they’ll fix something small and not fix the big thing. So what are they really doing? What kind of age verification mechanisms are they going to use? Secondly, they said they’re creating some space for teens. What is this going to look like? We don’t know. And I believe that until there’s real regulation at stake, we can’t be sure that they will take real precautions. 

I read a piece earlier this year in which you used the phrase “collective legal action,” saying that this is what’s needed to exert pressure on tech companies to change their designs, which trap users into “overuse.” That’s a fairly recent development, correct?

At the beginning, the people who were writing on this were mostly psychologists. Parents thought it was their own fault. The idea was, “Let me just fix my habits.” It’s self-help. The books that came before me were mostly talking about self-help methods. And when I was thinking about collective action, I realized: Parents can’t really change things by themselves, because you can’t isolate your kid and not give them a cell phone, not give them social media. It becomes an endless fight. And so I thought this has to be changed through collective action, through pressure — through governmental pressure, litigation. 

Jonathan Haidt’s book talks about collective action through parents doing things together in order to not have your kid be the only one who does not have social media or a phone. The idea is that it’s not our fault. It has to be done differently.

And to your point, a lot of this is by design, whether it’s social media or games or AI companions. By design, they’re meant to keep you there, keep you in place, keep you engaged. That’s something that, until recently, was not on a lot of people’s radar.

It took whistleblowers coming out and explaining how it works for us to understand it as a business model. There’s no accident. We’re getting these products for free: Gmail for free, Facebook for free. We are paying with our time and our data. They collect data on us in order to target advertising — that’s how they make money. And they need us online for as long as possible so they can collect the data — and also so we will see the ads. So they need to find ways to keep us online. And there are different mechanisms like the infinite scroll. And they come up with new ones. AI companions have new addictive mechanisms: the way that they talk to you, they always flatter you. For kids it’s even more addictive, but even for adults it’s, “You’re always doing a great job.”

It’s meant to keep you talking, meant to keep you engaged. You focus a lot on games and social media, but it strikes me that AI companions make those things seem quaint in terms of their addictive qualities, or the potential for real peril.

I agree with you. If you have a spectrum where social media is addictive — people spend many hours online, and they’re not interacting face-to-face — that’s an issue. And you see this with AI companions too. But what’s concerning about AI companions is that it’s much worse for kids. If you think about it, if you’re a kid and you go to middle school, kids are not nice. It’s much nicer to chat with somebody who’s always nice to you. Falling in love and getting your heart broken is not fun. There are many websites that just offer girlfriends that cater to you. So for me, the scariest thing is that kids will just never really develop the skills to have these relationships. And some adults may also come to prefer the bots.

About a year ago, I wrote a piece in which I talked to a college student, maybe 19 or 20 years old, who admitted that essentially he had outsourced advice about his romantic life to ChatGPT — he had a girlfriend, and whenever they had a fight or disagreement, he would excuse himself, go into the bathroom and ask ChatGPT what he should be doing. I can see that both ways: On the one hand, it just seems incredible. On the other hand, I can see where he’s basically looking for good advice. He’s looking for guidance. What do you make of that?

People say you can get advice, and you can practice your dating skills. I’ll give you something that happened to me, which is on a different scale: I was traveling abroad, and I was in this restaurant, and the menu was in a different language. So what did I do? I took a picture of the menu and uploaded it to ChatGPT and got it translated to English. While I was doing it, a young man came up to my partner and asked to translate. So what happened? I was already busy looking at my phone because I had a translation. My partner was speaking to this young man who was very happy to speak, and they were having a great conversation. 

That’s an example of the kind of things we’re giving up. This guy you wrote about, instead of going to the bathroom, maybe could have asked a friend, developed a deeper relationship with a friend. Maybe they would share experiences. But he gets used to getting the immediate answer from somebody else, and he doesn’t develop these relationships.

We miss out on the possibility of having a human interaction. 

Yes.

In its announcement, Character.AI actually apologized to its younger users, saying that many of them had told the company how important these characters had become to them. And I’ve heard that before. I wonder: How do we as adults start to think about the flip side of this, that it’s difficult for young people to tear themselves away from these things they’ve created? Do you have any sympathy for that?

I have concern, actually, because these kids sometimes kill themselves over these bots. So I am concerned about what will happen to kids who are very attached when these bots are suddenly gone. And you hear news stories even of adults who suddenly lost characters they were attached to. It’s a bit like, how do you get people who are addicted off the addiction when you suddenly cut them off? These are things we’ve never even thought of.

Is there anything I haven’t asked you that you think is an important piece of this?

An important piece of this is that you don’t yet have every teen, every kid, attached to an AI companion. So there’s a window of opportunity to create change. Social media is much more difficult, because by the time we realized how bad it was, everybody was on social media. The money interests were so big that they would fight every law in court. So it’s really important to move fast and also understand that Character.AI is a small part of the problem. Because it’s not just these specialized websites like Character.AI. It’s ChatGPT — one of the most recent lawsuits involved it. The AI bots in ChatGPT are becoming more human, so it’s important that any action address these bots, the characteristics they have and how they behave. Just getting rid of Character.AI is not going to solve the problem.

Safety or Censorship: Congress Rushes to Pass Broad Child Online Protection Laws
Wed, 08 May 2024

As Washington lawmakers scramble this week to finalize their last significant legislation before the fall presidential election — a must-pass bill to reauthorize the Federal Aviation Administration — they’ve tacked on more than a dozen unrelated amendments, including three online safety bills affecting students.

Taken together, the trio would create sweeping restrictions on children’s access to social media, impose new requirements on social media companies to ensure their products aren’t harmful to youth mental health and bolster educators’ digital surveillance obligations to ensure kids aren’t swiping through their favorite feeds in class. 

The three separate digital safety bills have bipartisan support and lawmakers could greenlight them as part of the FAA reauthorization legislation, which faces a Friday deadline. If passed, the legislative package could potentially end years of debate on these thorny questions and would mark the most consequential effort to regulate tech companies and children’s online safety in decades.


“Parents know there’s no good reason for a child to be doom-scrolling or binge-watching reels that glorify unhealthy lifestyles,” Sen. Ted Cruz, a Texas Republican who is co-sponsoring The Kids Off Social Media Act, said in a statement. “Young students should have their eyes on the board, not their phones.”

The move comes as lawmakers across the political spectrum sound an alarm over concerns that teens’ addiction to their social media feeds — complete with algorithms designed to keep them hooked and coming back for more — has exacerbated mental health issues in young people. It follows congressional testimony by a whistleblower who accused the company of knowing that apps like Instagram inflamed body image issues and other negative triggers among youth but failing to act to mitigate the harm while upholding a “see no evil, hear no evil” culture.

The controversial and heavily debated bills saw new life in January after social media executives were grilled during a contentious congressional hearing and Meta CEO Mark Zuckerberg apologized to parents who said their children were damaged, and in some cases died, after the company’s algorithms fed them a barrage of pernicious content. 

But critics contend the provisions amount to heavy-handed and unconstitutional censorship that fails to confront the root cause of young people’s anguish — and in some cases could hurt them by limiting their access to educational materials, blocking information designed to help them deal with mental health issues or by subjecting them to greater online surveillance.

Meta CEO Mark Zuckerberg apologizes during a January Senate committee hearing to families who say their children suffered emotional anguish, and in some cases died, as a result of their social media use. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

The three amendments are:

  • The Kids Online Safety Act would require tech companies to “exercise reasonable care” to ensure their services don’t surface in children’s feeds material deemed harmful, including posts that promote suicide, eating disorders and sexual exploitation.

    First introduced in 2022, the legislation would also require tools that would give parents greater ability to monitor their children’s online activities and mandate tech companies enable their most restrictive privacy settings for their youngest users by default.
  • The Children and Teens’ Online Privacy Protection Act, also known as COPPA 2.0, amends a 1998 law that requires tech companies receive parental consent before collecting data about children under 13 years old. COPPA 2.0 would extend existing requirements to children under 16, ban targeted advertising for children and require tech companies to delete data collected about children upon parental request. 
  • The Kids Off Social Media Act, introduced last week by Cruz and Hawaii Democratic Sen. Brian Schatz, would prohibit children under 13 years old from creating social media accounts and restrict tech companies from using algorithms to serve content to children under 17. It would also require schools that receive federal internet connectivity funding to block students’ access to social media sites on campus networks. 

The bill’s provisions have faced widespread pushback from digital rights and privacy advocates, including the nonprofit Electronic Frontier Foundation, which called it an unconstitutional infringement that “replaces parents’ choices about what their children can do online with a government-mandated prohibition.” 


On Tuesday, TikTok and its Chinese parent company sued to block a new federal law that bans the popular social media app in the U.S. unless it sells the platform to an approved buyer, accusing the government of stifling free speech and unfairly singling it out based on unfounded accusations it poses a national security threat.

In March, Georgia joined a growing list of states — including Louisiana, Arkansas, Texas and Utah — in moving to impose new parental consent requirements for children to create social media accounts. The Georgia law also bans social media use on school devices and creates age verification requirements for porn websites.

Aliya Bhatia (Center for Democracy & Technology)

Aliya Bhatia, a policy analyst at the nonprofit Center for Democracy and Technology, said that each bill now included in the FAA reauthorization act has been the subject of debate and opposition. Including them in unrelated, must-pass legislation with a short deadline, she said, “undermines the active conversations that are happening” about the bills, which she said are “just not ready for prime time.”

The Kids Online Safety Act, which has bipartisan support in the Senate, is endorsed by a host of advocacy groups, including the American Psychological Association, Common Sense Media and the American Academy of Pediatrics, who argue the rules could protect youth from the corrosive effects of social media.

At the same time, the legislation, which has differing House and Senate versions, has also received pushback from civil liberties groups and those representing LGBTQ+ students. The groups argue the bill amounts to government censorship with a likely disparate impact on LGBTQ+ youth and students of color. The Heritage Foundation, a conservative think tank, has endorsed the legislation as a way to restrict youth access to LGBTQ+ content, arguing that “keeping trans content away from children is protecting kids.”

Privacy advocates have warned the legislation could result in age-verification requirements across the internet that could require online users of all ages to provide identifying information to web platforms. 

Meanwhile, social media’s effects on youth mental well-being remain the subject of research and debate. In a report last year, the American Psychological Association noted that while social media use “is not inherently beneficial or harmful to young people,” the platforms should not surface to their young users content that encourages them to engage in risky behaviors or is discriminatory.

In a 2023 advisory, Surgeon General Vivek Murthy noted that social media use is nearly universal among young people, with more than a third of teens saying they use the apps “almost constantly.” While its impact on youth mental health isn’t fully understood, Murthy said, emerging research suggests that its use can be harmful — perpetuating a national youth mental health crisis “that we must urgently address.”

The Kids Off Social Media Act, which would prohibit youth access to sites like Instagram, builds on an existing law, the Children’s Internet Protection Act, that requires schools and libraries to monitor and filter youth internet use as a condition of receiving federal E-Rate internet connectivity funding. In response, schools nationwide have adopted digital surveillance tools that use algorithms to sift through billions of student communications to identify problematic online behaviors.

Meanwhile, a recent report found that web filters regularly used in schools do more than keep kids from goofing off in class. They also routinely limit students’ access to homework materials, educationally appropriate information about sexual and reproductive health and resources designed to prevent youth suicides.

For years, privacy advocates have called on the Federal Communications Commission to clarify how the rules apply to the modern internet and have argued that schools’ tech-driven monitoring efforts go far beyond their original intent. 

When the law went into effect in 2001, monitoring “quite literally meant looking over a kid’s shoulder as they used the computer,” said Kristin Woelfel, a policy counsel at the Center for Democracy and Technology, but in 2024 student monitoring has become “a very specific term that now means really pervasive and technical surveillance.”

In a survey of students, parents and teachers last year, the nonprofit found a majority supported digital activity monitoring in schools, yet nearly three-quarters of youth said that filtering and blocking technology made it more difficult to complete some homework, a challenge reported more often among LGBTQ+ students, and that the tools routinely led to disciplinary actions and police involvement.

“They don’t work as people think they do,” she said. “That, coupled with data that shows it’s actually detrimental to students, indicates even more that this is not the right path forward.” 

In a letter to lawmakers last week, a coalition of education nonprofits including the American Library Association and the Consortium for School Networking expressed concern about attaching social media limitations to E-Rate funding, which schools rely on to facilitate learning. 

“Schools and libraries will face delays or denials of E-rate funding due to allegations of non-compliance,” the groups wrote, arguing that it would give federal authorities control over social media policies that should be left to local officials. “The bill’s provisions seem to suggest that technology-driven learning models are always harmful, even when carefully crafted to promote educational purposes. In fact, there are several social media uses that can be beneficial for education and learning.”

Sen. Ted Cruz, a Republican of Texas, questions Meta CEO Mark Zuckerberg during a January Senate committee hearing about child sexual exploitation on the internet. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

In a statement announcing the legislation, Schatz offered the opposite perspective.

“There is no good reason for a nine-year-old to be on Instagram or TikTok,” he said. “There just isn’t. The growing evidence is clear: social media is making kids more depressed, more anxious, and more suicidal.”

In justifying the legislation, Schatz cites the work of psychologist and author Jonathan Haidt, who argues in his book The Anxious Generation that young people — and girls, in particular — face a “tidal wave” of anguish that can be traced back to the rise of smartphones.

Haidt’s characterization of tech’s role in youth well-being has drawn criticism, including from developmental psychologist Candice Odgers, who argued in a review that the claim “that digital technologies are rewiring our children’s brains and causing an epidemic of mental illness is not supported by science.”

Among the evidence is a study that examined the well-being of nearly 1 million people, ages 13 to 34 and 35 and over, as social media was being adopted in 72 countries; it found “no evidence suggesting that the global penetration of social media is associated with widespread psychological harm.”

Lawmakers Duel With Tech Execs on Social Media Harms to Youth Mental Health
Wed, 31 Jan 2024

During a hostile Senate hearing Wednesday that sometimes devolved into bickering, lawmakers from across the political spectrum accused social media companies of failing to protect young people online and pushed rules that would hold Big Tech accountable for youth suicides and child sexual exploitation.

The Senate Judiciary Committee hearing in Washington, D.C., was the latest act in a bipartisan effort to bolster federal regulations on social media platforms like Instagram and TikTok amid a growing chorus of parents and adolescent mental health experts warning the services have harmed youth well-being and, in some cases, pushed them to suicide. 

In an unprecedented moment, Meta founder and CEO Mark Zuckerberg, at the urging of Missouri Republican Sen. Josh Hawley, stood up and turned around to face the audience, apologizing to the parents in attendance who said their children were damaged — and in some cases, died — because of his company’s algorithms. 


“I’m sorry for everything you’ve all gone through,” said Zuckerberg, whose company owns Facebook and Instagram. “It’s terrible. No one should have to go through the things that your families have suffered.”

Senators argued the companies — and tech executives themselves — should be held legally responsible for instances of abuse and exploitation under tougher regulations that would limit children’s access to social media platforms and restrict their exposure to harmful content.

“Your platforms really suck at policing themselves,” Sen. Sheldon Whitehouse, a Rhode Island Democrat, told Zuckerberg and the CEOs of X, TikTok, Discord and Snap, who were summoned to testify. Section 230 of the Communications Decency Act, which allows social media platforms to moderate content as they see fit and generally provides immunity from liability for user-generated posts, has routinely shielded tech companies from accountability. As youth harms persist, he said those legal protections are “a very significant part of that problem.” 

Whitehouse pointed to a lawsuit against X, formerly Twitter, that was filed by two men who claimed a sex trafficker manipulated them into sharing sexually explicit videos of themselves over Snapchat when they were just 13 years old. Links to the videos appeared on Twitter years later, but the company allegedly refused to take action until after they were contacted by a Department of Homeland Security agent and the posts had generated more than 160,000 views. The suit was dismissed by the Ninth Circuit, which cited Section 230.

“That’s a pretty foul set of facts,” Whitehouse said. “There is nothing about that set of facts that tells me Section 230 performed any public service in that regard.”

In an opening statement, the Democratic committee chair, Sen. Dick Durbin of Illinois, offered a chilling description of the harms inflicted on young people by each of the social media platforms represented at the hearing. In addition to Zuckerberg, executives who testified were X CEO Linda Yaccarino, TikTok CEO Shou Chew, Snap co-founder and CEO Evan Spiegel and Discord CEO Jason Citron.

“Discord has been used to groom, abduct and abuse children,” Durbin said. “Meta’s Instagram helped connect and promote a network of pedophiles. Snapchat’s disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a, quote, ‘platform of choice’ for predators to access, engage and groom children for abuse. And the prevalence of [child sexual abuse material] on X has grown as the company has gutted its trust and safety workforce.”

Citron testified that Discord has “a zero tolerance policy” for content that features sexual exploitation and that it uses filters to scan and block such materials from its service. 

“Just like all technology and tools, there are people who exploit and abuse our platforms for immoral and illegal purposes,” Citron said. “All of us here on the panel today, and throughout the tech industry, have a solemn and urgent responsibility to ensure that everyone who uses our platforms is protected from these criminals both online and off.” 

Lawmakers have introduced a slate of regulatory bills that have gained bipartisan traction but have failed to become law. Among them is the Kids Online Safety Act, which would require social media companies and other online services to take “reasonable measures” to protect children from cyberbullying, sexual exploitation and materials that promote self-harm. It would also mandate strict privacy settings when teens use the online services. Other proposals include a bill that would require the platforms to report suspected drug activity to the police — some parents said their children overdosed and died after buying drugs on the platforms — and a bill that would hold the companies accountable for hosting child sexual abuse materials. 

In their testimonies, each of the tech executives said they have taken steps to protect children who use their services, including features that restrict certain types of content, limit screen time and curtail the people they’re allowed to communicate with. But they also sought to distance their services from harms in a bid to stave off regulations. 

“With so much of our lives spent on mobile devices and social media, it’s important to look into the effects on teen mental health and well-being,” Zuckerberg said. “I take this very seriously. Mental health is a complex issue, and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.” 

Zuckerberg pointed to a report by the National Academies of Sciences, Engineering and Medicine, which concluded there is a lack of evidence to confirm that social media causes changes in adolescent well-being at the population level and that the services could carry both benefits and harms for young people. While social media websites can expose children to online harassment and fringe ideas, researchers noted, the services can also be used by young people to foster community. 

In October, 42 state attorneys general sued Meta, alleging that the social media giant knowingly and purposely designed tools to addict children to its services. U.S. Surgeon General Vivek Murthy has issued an advisory warning that social media sites pose a “profound risk of harm” to youth mental health and has said the tools should come with warning labels. Among the evidence of harms is Meta’s own leaked internal research, which found that Instagram led to body-image issues among teenage girls and that many of its young users blamed the platform for increases in anxiety and depression. 

Republican lawmakers devoted a significant amount of time during the hearing to criticizing TikTok for its ties to the Chinese government, calling out the app for collecting data about U.S. citizens. The Justice Department is reportedly investigating allegations that ByteDance, the Chinese company that owns TikTok, used the app to surveil several American journalists who report on the tech industry. 

In response, Chew said the company launched an initiative — dubbed “Project Texas” — to prevent its Chinese employees from accessing personal data about U.S. citizens. But some employees claim the company has fallen short of that promise. 

YouTube and TikTok are by far the platforms where teens spend the most hours per day, according to a 2023 Gallup survey. Yet Neal Mohan, the CEO of Google-owned YouTube, was not called to testify.

Mainstream social media platforms have also been exploited for domestic online extremism. Earlier this month, for example, a teenager accused of carrying out a mass shooting at his Iowa high school reportedly maintained an active presence on Discord and, shortly before the rampage, commented in a channel dedicated to such attacks that he was “gearing up” for the mayhem. Just minutes before the shooting, the suspect appeared to capture a video inside a school bathroom and uploaded it to TikTok. 

Josh Golin, the executive director of Fairplay, a nonprofit devoted to bolstering online child protections, blasted the tech executives’ testimony for being little more than “evasions and deflections.” 

“If Congress really cares about the families who packed the hearing today holding pictures of their children lost to social media harms, they will move the Kids Online Safety Act,” Golin said in a statement. “Pointed questions and sound bites won’t save lives, but KOSA will.” 

The safety act, known as KOSA, has faced pushback from civil rights advocates on First Amendment grounds; they argue the proposal could be used to censor certain content, including resources for LGBTQ youth. Sen. Marsha Blackburn, a Republican from Tennessee and KOSA co-author, said last fall the rules are important to protect “minor children from the transgender in this culture” and cited the legislation as a way to shield children from “being indoctrinated” online. The Heritage Foundation, a conservative think tank, endorsed the legislation, arguing that “keeping trans content away from children is protecting kids.” 

Snap’s Evan Spiegel and X’s Linda Yaccarino both agreed to support the Kids Online Safety Act.

Aliya Bhatia, a policy analyst with the nonprofit Center for Democracy and Technology, said that although lawmakers made clear their intention to act, their directives could end up doing more harm than good. She said the platforms serve as “peer-to-peer learning and community networks” where young people can access information about reproductive health and other important topics that they might not feel comfortable receiving from adults in their lives. 

“It’s clear that this is a really tricky issue, it’s really difficult for the government and companies to decide what is harmful for young people,” Bhatia said. “What one young person finds helpful online, another might find harmful.”

South Carolina’s Sen. Lindsey Graham, the committee’s ranking Republican, said that social media companies can’t be trusted to keep kids safe online and that lawmakers have run out of patience.

“If you’re waiting on these guys to solve the problem,” he said, “we’re going to die waiting.” 

Teen Mental Health Crisis Pushes More School Districts to Sue Social Media Giants /article/teen-mental-health-crisis-pushes-more-school-districts-to-sue-social-media-giants/ Fri, 31 Mar 2023 12:30:00 +0000 /?post_type=article&p=706803 The teen mental health crisis has so taxed and alarmed school districts across the country that many are entering legal battles against the social media giants they say have helped cause it, including TikTok, Snap, Meta, YouTube and Google.

At least 11 school districts, one county, and one California county system that oversees 23 smaller districts have filed suits this year, representing roughly 469,000 students. 

Two others in Arizona are considering their own complaints, one superintendent told Âé¶čŸ«Æ·. Eleven more districts have voted to pursue similar litigation, and many others across the country are on the verge of doing the same, according to a lawyer representing a New Jersey district.


“Schools, states, and Americans across the country are rightly pushing back against Big Tech putting profits over kids’ safety online,” Sen. Richard Blumenthal, co-sponsor of the bipartisan Kids Online Safety Act, told Âé¶čŸ«Æ·. “These efforts, proliferated by harrowing stories from families amid a worsening youth mental health crisis, underscore the urgency for Congress to act.” 

Algorithms and platform design have “exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants’ social media platforms,” Seattle Public Schools claimed in the first suit filed this January.

Districts in Washington, Oregon, Arizona, New Jersey and elsewhere say tech companies intentionally designed their platforms to hook young users, exacerbating depression, anxiety, tech addiction and self-harm, and straining learning and district finances. 

But the legal fight, whether tried or settled, will not be easy, outside counsel and at least one district leader said. 

“We don’t think that this is a slam dunk case. We think it’s going to be an uphill battle. But our board and I believe that this is in the best interest of our students to do this,” said Andi Fourlis, superintendent of Arizona’s largest district, Mesa Public Schools. “It’s about making the case that we need to do better for our kids.” 

Just how badly Mesa’s teens are hurting is laid out in detail in court filings: More than a third are chronically absent, 3,500 more were involved in disciplinary incidents in 2021-22 than in 2019-20 and the district has seen a “surge” in suicidal ideation and anxiety. 

Buried in the 111-page lawsuit, a high school senior’s video essay illustrates the painful impacts of social media addiction: risky or self-destructive behavior, disconnection from friends.

Simultaneously, lawmakers are proposing bills to make platforms safer. Senate hearings are underway, featuring parents whose children died by suicide. TikTok’s CEO testified before Congress this month to address concerns about exposure to harmful content. President Joe Biden flagged the issue in his last State of the Union Address.

Both legislative and legal efforts are after similar goals: changing the algorithms and product design believed to be hurting kids. Through lawsuits, districts also seek financial compensation for the increased mental health services and training they’ve been compelled to establish. 

“The harms caused by social media companies have impacted the districts’ ability to carry out their core mission of providing education. The expenditures are not sustainable and divert resources from classroom instruction and other programs,” said Michael Innes, partner with Carella Byrne, Cecchi, Olstein, Brody & Agnello, a firm representing New Jersey schools.

Previous complaints against opioid and e-cigarette companies, which levied public nuisance and negligence claims as districts’ social media filings do, resulted in multimillion-dollar settlements. 

But some legal experts say there’s a key distinction in this case: Big Tech companies aren’t the ones producing content on these platforms; individuals are. And the companies have some hefty legal protections. 

“School districts are not in the business of suing people 
 the threshold for initiating litigation is very high,” said Dean Kawamoto, a lawyer with Keller Rohrback, the Seattle-based firm representing four districts in the social media cases and thousands of clients in Juul litigation. 

“I do think it says something that you’ve got a group of schools that have filed now, and I think more are going to join them,” Kawamoto added. 

Some outside counsel are skeptical. 

“I think there are questions about whether the litigation system is even a coherent way to go about this,” First Amendment scholar and Harvard Law professor Rebecca Tushnet told Âé¶čŸ«Æ·. “It’s very hard to use individual litigation to get systemic change, excepting in particular circumstances.” 

The exceptions, she added, have clear visions and specific outcomes, like requiring a doctor on-call for safer prison conditions. Those kinds of metrics are difficult to name when it comes to algorithms and mental health. 

What precedent (or lack thereof) tells us

Social media companies’ lawyers are likely to assert free speech protections early and often, including in initial motions to dismiss.

“The conventional wisdom is that if motions to dismiss are denied in cases like this, [companies] are much more likely to settle 
 reality is actually a little more mixed,” Tushnet said, adding that when claims go after business models, companies fight harder. 

An added challenge is proving causal harm — that social media companies have caused student depression, anxiety, eating disorders or self-harm. It is a link that neuroscientists and researchers are still working to establish, though experts say there’s an urgent need for more definitive evidence. 

“This is a watershed moment where schools can really roll up their sleeves and do something because — not that they haven’t been in the past — but because it’s so obvious. It’s right in front of them. It’s impacting students’ education,” said Jerry Barone, chief clinical officer at Effective School Solutions, which brings mental health care to schools. 

About 13.5% of teen girls say Instagram makes thoughts of suicide worse, and 17% say it makes eating disorders worse, according to Meta’s leaked internal research, first revealed by The Wall Street Journal.

Even if districts are able to provide proof, they may never see a judgment. 

Public nuisance claims in tobacco and opioid mass torts were more successful in “inducing settlements, rather than in courthouse outcomes,” according to Robert Rabin, tort expert and professor at Stanford University. 

While he’s not “dismissive” of districts’ efforts, “the precedents don’t supply clear-cut support for the claims here.”

The interim

As lawyers work out the details, students hang in the balance. Some are skeptical the suits will amount to anything at all, at least in their adolescence. 

“Why do you guys waste so much time on these useless things that you know get nowhere, when you can do it with things that you know will get somewhere?” said Angela Ituarte, a sophomore at a Seattle high school. 

Many young people interviewed by Âé¶čŸ«Æ· described their social media use as a double-edged sword: affirming, a place where they learned about mental health or found community, particularly for queer students of color; and simultaneously dangerous, a place where they connected with adults when they were 14 and saw dangerous diets promoted.

Social media, Ituarte said, makes it seem like self-harm and disordered eating, “are the solution to everything. And it’s hard to get that out of those algorithms — even if you block the accounts or say you’re not interested it still keeps popping up. Usually it’s when things are bad, too.”

In a late February letter to senators, Meta touted a promising initiative to nudge teens toward new content when they dwell on one topic for extended periods. Only 1 in 5 teens actually moved to a new topic during a weeklong trial. 

To curb cyberbullying, users now get warnings for potentially offensive comments. People edit or delete their messages only 50% of the time, according to the company’s responses to Senate inquiries. 

Meta, YouTube and Google did not respond to requests for comment. TikTok told Âé¶čŸ«Æ· it could not comment on ongoing litigation. The company has just started requiring users who say they are under 18 to enter a password after scrolling for an hour.

In a statement to Âé¶čŸ«Æ·, Snap said they “are constantly evaluating how we continue to make our platform safer.” Snap has partnered with mental health organizations to launch an in-app support system for users who may be experiencing a crisis, and acknowledged that the work may never be done. 

The process has only just begun. If the suits move to trial, some districts will be chosen as bellwethers to represent the many plaintiffs, tasked with regularly contributing to a lengthy trial. 

Still, there’s no doubt in Fourlis’s mind. 

“Sometimes you have to be the first to step forward to take a bold leap so that others can follow,” she said. “Being the superintendent of the largest school district in Arizona, what we do often sets precedents, and I have to be very strategic about that responsibility.”

Disclosure: Campbell Brown, Meta’s vice president of media partnerships, is a co-founder and member of the board of directors of Âé¶čŸ«Æ·. She played no role in the editing of this article.

Opinion: 5 Challenges of Doing College in the Metaverse /article/5-challenges-of-doing-college-in-the-metaverse/ Thu, 15 Sep 2022 17:00:00 +0000 /?post_type=article&p=696529 This article was originally published in The Conversation.

More and more colleges are becoming “metaversities,” taking their physical campuses into a virtual online world, often called the “metaverse.” One initiative has universities working with Meta, the parent company of Facebook, and virtual reality company VictoryXR to create 3D online replicas – sometimes called “digital twins” – of their campuses that are updated live as people and items move through the real-world spaces.

Some classes are already being taught this way. And VictoryXR says that by 2023, it plans to offer synchronous classes, which allow for a group setting with live instructors and real-time class interactions.

One metaversity builder, New Mexico State University, says it wants to offer degrees in which students can take all their classes in virtual reality.


There are many potential benefits, such as 3D visual learning, more realistic interactivity and easier access for faraway students. But there are also potential problems. My recent research has focused on aspects of the metaverse and its risks, such as privacy and security. I see five challenges:

1. Significant costs and time

The metaverse can reduce some expenses. For instance, building a physical cadaver laboratory carries heavy construction and maintenance costs, and a virtual cadaver lab has made that kind of scientific training more affordable.

However, licenses for virtual reality content, construction of digital twin campuses, virtual reality headsets and other investment expenses do add up.

A metaverse course license can be a significant expense for universities, and VictoryXR also charges a per-student fee to access its metaverse.

Additional costs are incurred for virtual reality headsets. While Meta is providing a set of free headsets for metaversities launched by Meta and VictoryXR, that’s only a fraction of what may be needed. The low-end 128GB version of the Meta Quest 2 retails for about $400. Managing and maintaining a large number of headsets involves additional operational costs and time.

Colleges also need to spend significant time and resources to build metaverse courses. Even more time will be required to deliver those courses, many of which will need content created from scratch.

Most educators don’t have the skills to create immersive course content, which can involve merging videos, still images and audio with text and interactivity elements into an integrated virtual experience.

2. Data privacy, security and safety concerns

Business models of companies developing metaverse technologies often depend on collecting user data. For instance, people who want to use Meta’s Oculus Quest 2 virtual reality headsets must have Facebook accounts.

The headsets can collect highly personal and sensitive data about their users, and Meta has indicated that advertisers might have access to it.

Meta is also working on a high-end virtual reality headset, code-named Project Cambria, with more advanced capabilities. Sensors in the device will allow a virtual avatar to maintain eye contact and make facial expressions that mirror the user’s eye movements and face. That data could be used to infer information about users and target them with personalized advertising.

Professors and students may not freely participate in class discussions if they know that all their moves, their speech and even their facial expressions are being recorded.

The virtual environment and its equipment can also collect a wide range of user data, such as motion, biometric readings and even signals of emotions.

Cyberattacks in the metaverse could even cause physical harm. Metaverse interfaces are deeply immersive, so they effectively trick the user’s brain into believing the user is in a different environment. An attacker who compromises that environment can influence the activities of immersed users, even inducing them to move, unawares, to a dangerous physical location, such as the top of a staircase.

The metaverse can also expose children to new risks. For instance, Roblox has launched a program to bring 3D, interactive, virtual environments into physical and online classrooms. Roblox says it has safety protections in place, but no protections are perfect, and its metaverse involves user-generated content and a chat feature, which could expose students to inappropriate material or people, among other dangers.

3. Lack of rural access to advanced infrastructure

Many metaverse applications, such as live virtual reality classes, are data-intensive. They require high-speed data networks to handle all of the information flowing across the virtual and physical space.

Many users, especially in rural areas, lack access to such networks. For instance, while 97% of the population living in urban areas in the U.S. has access to high-speed broadband, the share is far lower in rural communities and on tribal lands.

4. Challenges of adapting to a new environment

Building and launching a metaversity requires drastic changes in a school’s approach to teaching and learning.
For instance, metaverse students are not passive viewers but active participants in virtual reality games and other activities.

The combination of advanced technologies such as artificial intelligence and virtual reality can create personalized learning experiences that are not in real time but are still experienced through the metaverse. Automatic systems that tailor the content and pace of learning to the ability and interest of the student can make learning in the metaverse individualized and unstructured, with fewer set rules.

Those differences require significant changes to how schools assess learning. Traditional measures such as quizzes and tests don’t map neatly onto the individualized and unstructured learning experiences offered by the metaverse.

5. Amplifying biases

Gender, racial and ideological biases are common in textbooks and other instructional materials, and they influence how students understand certain events and topics. In some cases, those biases prevent the achievement of justice and other goals, such as equity in education.

Biases’ effects can be even more powerful in rich media environments, which are more effective at shaping views than textbooks. Immersive virtual reality has the potential to be more persuasive still.

To maximize the benefits of the metaverse for teaching and learning, universities – and their students – will have to wrestle with protecting users’ privacy, training teachers and securing national investment in broadband networks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Meet the Gatekeepers of Students’ Private Lives /article/meet-the-gatekeepers-of-students-private-lives/ Mon, 02 May 2022 11:15:00 +0000 /?post_type=article&p=588567 If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Megan Waskiewicz used to sit at the top of the bleachers, rest her back against the wall and hide her face behind the glow of a laptop monitor. While watching one of her five children play basketball on the court below, she knew she had to be careful. 

The mother from Pittsburgh didn’t want other parents in the crowd to know she was also looking at child porn.

Waskiewicz worked as a content moderator for Gaggle, a surveillance company that monitors the online behaviors of some 5 million students across the U.S. on their school-issued Google and Microsoft accounts. Through an algorithm designed to flag references to sex, drugs, and violence and a team of content moderators like Waskiewicz, the company sifts through billions of students’ emails, chat messages and homework assignments each year. Their work is supposed to ferret out evidence of potential self-harm, threats or bullying, incidents that would prompt Gaggle to notify school leaders and, in the most urgent cases, the police.

As a result, kids’ deepest secrets — like nude selfies and suicide notes — regularly flashed onto Waskiewicz’s screen. Though she felt “a little bit like a voyeur,” she believed Gaggle helped protect kids. But mostly, the low pay, the fight for decent hours, inconsistent instructions and stiff performance quotas left her feeling burned out. Gaggle’s moderators face pressure to review 300 incidents per hour, or one every 12 seconds, and Waskiewicz knew she could get fired on a moment’s notice if she failed to distinguish mundane chatter from potential safety threats in that sliver of time. She lasted about a year.

“In all honesty I was sort of half-assing it,” Waskiewicz admitted in an interview with Âé¶čŸ«Æ·. “It wasn’t enough money and you’re really stuck there staring at the computer reading and just click, click, click, click.”

Content moderators like Waskiewicz, hundreds of whom are paid just $10 an hour on month-to-month contracts, are on the front lines of a company that claims it saved the lives of 1,400 students last school year and argues that the growing mental health crisis makes its presence in students’ private affairs essential. Gaggle founder and CEO Jeff Patterson has warned about “a tsunami of youth suicide headed our way” and said that schools have “a moral obligation to protect the kids on their digital playground.” 

Eight former content moderators at Gaggle shared their experiences for this story. While several believed their efforts in some cases did shield kids from serious harm, they also surfaced significant questions about the company’s efficacy, its employment practices and its effect on students’ civil rights.

Among the moderators who worked on a contractual basis, none had prior experience in school safety, security or mental health. Instead, their employment histories included retail work and customer service, but they were drawn to Gaggle while searching for remote jobs that promised flexible hours. 

They described an impersonal and cursory hiring process that appeared automated. Former moderators reported submitting applications online and never having interviews with Gaggle managers — either in-person, on the phone or over Zoom — before landing jobs.

Once hired, moderators reported insufficient safeguards to protect students’ sensitive data, a work culture that prioritized speed over quality, scheduling issues that sent them scrambling to get hours and frequent exposure to explicit content that left some traumatized. Contractors lacked benefits, including mental health care, and one former moderator said he quit after repeated exposure to explicit material that so disturbed him he couldn’t sleep, leaving him without “any money to show for what I was putting up with.”

Gaggle’s content moderators number as many as 600 contractors at any given time; just two dozen are employees with access to benefits and on-the-job training that lasts several weeks. Gaggle executives have sought to downplay contractors’ role with the company, arguing they use “common sense” to distinguish false flags generated by the algorithm from potential threats and do “not require substantial training.” 

While the experiences reported by Gaggle’s moderator team echo those of content reviewers at platforms like Meta-owned Facebook, Patterson said his company relies on “U.S.-based, U.S.-cultured reviewers as opposed to outsourcing that work to India or Mexico or the Philippines,” as many social media companies do. He rebuffed former moderators who said they lacked sufficient time to consider the severity of a particular item.

“Some people are not fast decision-makers. They need to take more time to process things and maybe they’re not right for that job,” he told Âé¶čŸ«Æ·. “For some people, it’s no problem at all. For others, their brains don’t process that quickly.”

Executives also sought to minimize the contractors’ access to students’ personal information; a spokeswoman said they only see “small snippets of text” and lacked access to what’s known as students’ “personally identifiable information.” Yet former contractors described reading lengthy chat logs, seeing nude photographs and, in some cases, coming upon students’ names. Several former moderators said they struggled to determine whether something should be escalated as harmful due to “gray areas,” such as whether a Victoria’s Secret lingerie ad would be considered acceptable or not. 

“Those people are really just the very, very first pass,” Gaggle spokeswoman Paget Hetherington said. “It doesn’t really need training, it’s just like if there’s any possible doubt with that particular word or phrase it gets passed on.” 

Molly McElligott, a former content moderator and customer service representative, said management was laser focused on performance metrics, appearing more interested in business growth and profit than protecting kids. 

“I went into the experience extremely excited to help children in need,” McElligott wrote in an email. Unlike the contractors, McElligott was an employee at Gaggle, where she worked for five months in 2021 before taking a position at the Manhattan District Attorney’s Office in New York. “I realized that was not the primary focus of the company.”

Gaggle is part of a burgeoning campus security industry that’s seen significant business growth in the wake of mass school shootings as leaders scramble to prevent future attacks. Patterson, who founded the company in 1999 to offer student email accounts that could be monitored for inappropriate content, said its focus now is mitigating the youth mental health crisis.

Patterson said the team talks about “lives saved” and child safety incidents at every meeting, and they are open about sharing the company’s financial outlook so that employees “can have confidence in the security of their jobs.”

Content moderators work at a Facebook office in Austin, Texas. Unlike the social media giant, Gaggle’s content moderators work remotely. (Ilana Panich-Linsman / Getty Images)

‘We are just expendable’

Under the pressure of new federal scrutiny, Gaggle, along with three other companies that monitor students online, has said it relies on a “highly trained content review team” to analyze student materials and flag safety threats. Yet former contractors, who make up the bulk of Gaggle’s content review team, described their training, a slideshow and an online quiz, as “a joke” that left them ill-equipped to complete a job with such serious consequences for students and schools.

As an employee on the company’s safety team, McElligott said she underwent two weeks of training, but the disorganized instruction meant she and other moderators were “more confused than when we started.”

Former content moderators have also flocked to employment websites like Indeed.com to warn job seekers about their experiences with the company, often sharing reviews that resembled the former moderators’ feedback to Âé¶čŸ«Æ·.

“If you want to be not cared about, not valued and be completely stressed/traumatized on a daily basis this is totally the job for you,” one reviewer wrote on Indeed. “Warning, you will see awful awful things. No they don’t provide therapy or any kind of support either.

“That isn’t even the worst part,” the reviewer continued. “The worst part is that the company does not care that you hold them on your backs. Without safety reps they wouldn’t be able to function, but we are just expendable.” 

As the first layer of Gaggle’s human review team, contractors analyze materials flagged by the algorithm and decide whether to escalate students’ communications for additional consideration. Designated employees on Gaggle’s Safety Team are in charge of calling or emailing school officials to notify them of troubling material identified in students’ files, Patterson said.

Gaggle’s staunchest critics have questioned the tool’s efficacy and describe it as a student privacy nightmare. In March, Democratic Sens. Elizabeth Warren and Ed Markey pressed Gaggle and similar companies to protect students’ civil rights and privacy. In a report, the senators said the tools could surveil students inappropriately, compound racial disparities in school discipline and waste tax dollars.

The information shared by the former Gaggle moderators with Âé¶čŸ«Æ· “struck me as the worst-case scenario,” said attorney Amelia Vance, the co-founder and president of Public Interest Privacy Consulting. Content moderators’ limited training and vetting, as well as their lack of backgrounds in youth mental health, she said, “is not acceptable.”

In its response to lawmakers, Gaggle described a two-tiered review procedure but didn’t disclose that low-wage contractors were the first line of defense. CEO Patterson told Âé¶čŸ«Æ· they “didn’t have nearly enough time” to respond to lawmakers’ questions about their business practices and didn’t want to divulge proprietary information. Gaggle uses a third party to conduct criminal background checks on contractors, Patterson said, but he acknowledged they aren’t interviewed before getting placed on the job.

“There’s a lot of contractors. We can’t do a physical interview of everyone and I don’t know if that’s appropriate,” he said. “It might actually introduce another set of biases in terms of who we hire or who we don’t hire.”

‘Other eyes were seeing it’

In a previous investigation, Âé¶čŸ«Æ· analyzed a cache of public records to expose how Gaggle’s algorithm and content moderators subject students to relentless digital surveillance long after classes end for the day, extending schools’ authority far beyond their traditional powers to regulate speech and behavior, including at home. Gaggle’s algorithm relies largely on keyword matching and gives content moderators a broad snapshot of students’ online activities including diary entries, classroom assignments and casual conversations between students and their friends. 
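
To make concrete what this kind of keyword matching involves, here is a minimal illustrative sketch in Python. Every detail in it (the keyword dictionary, the severity labels, the snippet width, the function name) is hypothetical and invented for illustration; Gaggle's actual system is proprietary and more elaborate.

import re

# Hypothetical keyword dictionary mapping phrases to severity tiers.
# Gaggle's real dictionary is proprietary and much larger.
KEYWORDS = {
    "suicide": "urgent",
    "kill myself": "urgent",
    "self-harm": "review",
}

def flag_text(text):
    """Return snippets around keyword hits for a human moderator to review."""
    hits = []
    for phrase, severity in KEYWORDS.items():
        for m in re.finditer(re.escape(phrase), text, re.IGNORECASE):
            # Capture surrounding context, akin to the "small snippets
            # of text" the company says moderators see.
            start = max(m.start() - 40, 0)
            end = m.end() + 40
            hits.append({"phrase": phrase, "severity": severity,
                         "snippet": text[start:end]})
    return hits

# A book report and a genuine cry for help trip the same wire.
print(flag_text("My essay on The Catcher in the Rye touches on suicide."))

Because matching like this is context-free, a student's literature essay is flagged exactly as a real crisis would be, which is why the false positives former moderators describe, sunsets and book reports alike, land on human reviewers' screens.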

After the pandemic shuttered schools and shuffled students into remote learning, Gaggle oversaw a surge in students’ online materials and a wave of school districts interested in its services. The company’s business grew as educators scrambled to keep a watchful eye on students whose chatter with peers moved from school hallways to instant messaging platforms like Google Hangouts. One year into the pandemic, Gaggle reported a spike in references to suicide and self-harm, accounting for more than 40% of all flagged incidents. 

Waskiewicz, who began working for Gaggle in January 2020, said that remote learning spurred an immediate shift in students’ online behaviors. Under lockdown, students without computers at home began using school devices for personal conversations. Sifting through the everyday exchanges between students and their friends, Waskiewicz said, became a time suck and left her questioning her own principles. 

“I felt kind of bad because the kids didn’t have the ability to have stuff of their own and I wondered if they realized that it was public,” she said. “I just wonder if they realized that other eyes were seeing it other than them and their little friends.”

Student activity monitoring software like Gaggle has become ubiquitous in U.S. schools, and 81% of teachers work in schools that use tools to track students’ computer activity, according to a recent survey by the nonprofit Center for Democracy and Technology. A majority of teachers said the benefits of using such tools, which can block obscene material and monitor students’ screens in real time, outweigh potential risks.

Likewise, students generally recognize that their online activities on school-issued devices are being observed, the survey found, and alter their behaviors as a result. More than half of student respondents said they don’t share their true thoughts or ideas online as a result of school surveillance and 80% said they were more careful about what they search online. 

A majority of parents reported that the benefits of keeping tabs on their children’s activity exceeded the risks. Yet they may not have a full grasp on how programs like Gaggle work, including the heavy reliance on untrained contractors and weak privacy controls revealed by Âé¶čŸ«Æ·â€™s reporting, said Elizabeth Laird, the group’s director of equity in civic technology. 

“I don’t know that the way this information is being handled actually would meet parents’ expectations,” Laird said. 

Another former contractor, who reached out to Âé¶čŸ«Æ· to share his experiences with the company anonymously, became a Gaggle moderator at the height of the pandemic. As COVID-19 cases grew, he said he felt unsafe continuing his previous job as a caregiver for people with disabilities so he applied to Gaggle because it offered remote work. 

About a week after he submitted an application, Gaggle gave him a key to kids’ private lives — including, most alarming to him, their nude selfies. Exposure to such content was traumatizing, the former moderator said, and while the job took a toll on his mental well-being, it didn’t come with health insurance. 

“I went to a mental hospital in high school due to some hereditary mental health issues and seeing some of these kids going through similar things really broke my heart,” said the former contractor, who shared his experiences on the condition of anonymity, saying he feared possible retaliation by the company. “It broke my heart that they had to go through these revelations about themselves in a context where they can’t even go to school and get out of the house a little bit. They have to do everything from home — and they’re being constantly monitored.” 

In this screenshot, Gaggle explains its terms and conditions for contract content moderators. The screenshot, which was provided to Âé¶čŸ«Æ· by a former contractor who asked to remain anonymous, has been redacted.

Gaggle employees are offered benefits, including health insurance, and can attend group therapy sessions twice per month, Hetherington said. Patterson acknowledged the job can take a toll on staff moderators, but sought to downplay its effects on contractors and said they’re warned about exposure to disturbing content during the application process. He said using contractors allows Gaggle to offer the service at a price school districts can afford. 

“Quite honestly, we’re dealing with school districts with very limited budgets,” Patterson said. “There have to be some tradeoffs.” 

The anonymous contractor said he wasn’t as concerned about his own well-being as he was about the welfare of the students under the company’s watch. The company, he said, lacked adequate safeguards to protect students’ sensitive information from leaking outside the digital environment that Gaggle built for moderators to review such materials. Contract moderators work remotely with limited supervision or oversight, and he became especially concerned about how the company handled students’ nude images, which are reported to school districts and the National Center for Missing & Exploited Children. Nudity and sexual content accounted for about 17% of emergency phone calls and email alerts to school officials last school year, according to the company. 

Contractors, he said, could easily save the images for themselves or share them on the dark web. 

Patterson acknowledged the possibility but said he wasn’t aware of any data breaches. 

“We do things in the interface to try to disable the ability to save those things,” Patterson said, but “you know, human beings who want to get around things can.”

‘Made me feel like the day was worth it’

Vara Heyman was looking for a career change. After working jobs in retail and customer service, she made the pivot to content moderation and a contract position with Gaggle was her first foot in the door. She was left feeling baffled by the impersonal hiring process, especially given the high stakes for students. 

Waskiewicz had a similar experience. In fact, she said the only time she ever interacted with a Gaggle supervisor was when she was instructed to provide her bank account information for direct deposit. The interaction left her questioning whether the company that contracts with more than 1,500 school districts was legitimate or a scam. 

“It was a little weird when they were asking for the banking information, like ‘Wait a minute is this real or what?’” Waskiewicz said. “I Googled them and I think they’re pretty big.”

Heyman said that sense of disconnect continued after being hired, with communications between contractors and their supervisors limited to a Slack channel. 

Despite the challenges, several former moderators believe their efforts kept kids safe from harm. McElligott, the former Gaggle safety team employee, recalled an occasion when she found a student’s suicide note. 

“Knowing I was able to help with that made me feel like the day was worth it,” she said. “Hearing from the school employees that we were able to alert about self-harm or suicidal tendencies from a student they would never expect to be suffering was also very rewarding. It meant that extra attention should or could be given to the student in a time of need.” 

Susan Enfield, the superintendent of Highline Public Schools in suburban Seattle, said her district’s contract with Gaggle has saved lives. Earlier this year, for example, the company detected a student’s suicide note early in the morning, allowing school officials to spring into action. The district uses Gaggle to keep kids safe, she said, but acknowledged it can be a disciplinary tool if students violate the district’s code of conduct. 

“No tool is perfect, every organization has room to improve, I’m sure you could find plenty of my former employees here in Highline that would give you an earful about working here as well,” said Enfield, one of 23 current or former superintendents from across the country who Gaggle cited as references in its letter to Congress. 

“There’s always going to be pros and cons to any organization, any service,” Enfield told Âé¶čŸ«Æ·, “but our experience has been overwhelmingly positive.”

True safety threats were infrequent, former moderators said, and most of the content was mundane, in part because the company’s artificial intelligence lacked sophistication. They said the algorithm routinely flagged students’ papers on the novels To Kill a Mockingbird and The Catcher in the Rye. They also reported being inundated with spam emailed to students, acting as human spam filters for a task that’s long been automated in other contexts. 

Conor Scott, who worked as a contract moderator while in college, said that “99% of the time” Gaggle’s algorithm flagged pedestrian materials, including pictures of sunsets and students’ essays about World War II. Valid safety concerns, including references to violence and self-harm, were rare, Scott said. But he still believed the service had value and felt he was doing “the right thing.”

McElligott said that managers’ personal opinions added another layer of complexity. Though moderators were “held to strict rules of right and wrong decisions,” she said they were ultimately “being judged against our managers’ opinions of what is concerning and what is not.” 

“I was told once that I was being overdramatic when it came to a potential inappropriate relationship between a child and adult,” she said. “There was also an item that made me think of potential trafficking or child sexual abuse, as there were clear sexual plans to meet up — and when I alerted it, I was told it was not as serious as I thought.” 

Patterson acknowledged that gray areas exist and that human discretion is a factor in deciding what materials are ultimately elevated to school leaders. But such materials, he said, are not the most urgent safety issues. He said their algorithm errs on the side of caution and flags harmless content because district leaders are “so concerned about students.” 

The former moderator who spoke anonymously said he grew alarmed by the sheer volume of mundane student materials that were captured by Gaggle’s surveillance dragnet, and pressure to work quickly didn’t offer enough time to evaluate long chat logs between students having “heartfelt and sensitive” conversations. On the other hand, run-of-the-mill chatter offered him a little wiggle room. 

“When I would see stuff like that I was like ‘Oh, thank God, I can just get this out of the way and heighten how many items per hour I’m getting,’” he said. “It’s like ‘I hope I get more of those because then I can maybe spend a little more time actually paying attention to the ones that need it.’” 

Ultimately, he said he was unprepared for such extensive access to students’ private lives. Because Gaggle’s algorithm flags keywords like “gay” and “lesbian,” for example, it alerted him to students exploring their sexuality online. Hetherington, the Gaggle spokeswoman, said such keywords are included in its dictionary to “ensure that these vulnerable students are not being harassed or suffering additional hardships,” but critics have accused the company of subjecting LGBTQ students to disproportionate surveillance. 

“I thought it would just be stopping school shootings or reducing cyberbullying but no, I read the chat logs of kids coming out to their friends,” the former moderator said. “I felt tremendous power was being put in my hands” to distinguish students’ benign conversations from real danger, “and I was given that power immediately for $10 an hour.” 

Minneapolis student Teeth Logsdon-Wallace, who posed for this photo with his dog Gilly, used a classroom assignment to discuss a previous suicide attempt and explained how his mental health had since improved. He became upset after Gaggle flagged his assignment. (Photo courtesy Alexis Logsdon)

A privacy issue

For years, student privacy advocates and civil rights groups have warned about the potential harms of Gaggle and similar surveillance companies. Fourteen-year-old Teeth Logsdon-Wallace, a Minneapolis high school student, fell under Gaggle’s watchful eye during the pandemic. Last September, he used a class assignment to write about a previous suicide attempt and explained how music helped him cope after being hospitalized. Gaggle flagged the assignment to a school counselor, a move the teen called a privacy violation. 

He said it’s “just really freaky” that moderators can review students’ sensitive materials in public places like at basketball games, but ultimately felt bad for the contractors on Gaggle’s content review team. 

“Not only is it violating the privacy rights of students, which is bad for our mental health, it’s traumatizing these moderators, which is bad for their mental health,” he said. Relying on low-wage workers with high turnover, limited training and without backgrounds in mental health, he said, can have consequences for students. 

“Bad labor conditions don’t just affect the workers,” he said. “It affects the people they say they are helping.” 

Gaggle cannot prohibit contractors from reviewing students’ private communications in public settings, Heather Durkac, the senior vice president of operations, said in a statement. 

“However, the contractors know the nature of the content they will be reviewing,” Durkac said. “It is their responsibility and part of their presumed good and reasonable work ethic to not be conducting these content reviews in a public place.” 

Gaggle’s former contractors also weighed students’ privacy rights. Heyman said she “went back and forth” on those implications for several days before applying to the job. She ultimately decided that Gaggle was acceptable since it is limited to school-issued technology. 

“If you don’t want your stuff looked at, you can use Hotmail, you can use Gmail, you can use Yahoo, you can use whatever else is out there,” she said. “As long as they’re being told and their parents are being told that their stuff is going to be monitored, I feel like that is OK.” 

Logsdon-Wallace and his mother said they didn’t know Gaggle existed until his classroom assignment got flagged to a school counselor. 

Meanwhile, the anonymous contractor said that chat conversations between students that got picked up by Gaggle’s algorithm helped him understand the effects that surveillance can have on young people. 

“Sometimes a kid would use a curse word and another kid would be like, ‘Dude, shut up, you know they’re watching these things,’” he said. “These kids know that they’re being looked in on,” even if they don’t realize their observer is a contractor working from the couch in his living room. “And to be the one that is doing that — that is basically fulfilling what these kids are paranoid about — it just felt awful.” 

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Disclosure: Campbell Brown is the head of news partnerships at Facebook. Brown co-founded Âé¶čŸ«Æ· and sits on its board of directors.
