Transcription of the episode “Surveillance in school: Invasive technology, junk science”

[00:00:15] Jon M: Hi, I’m Jon Moscow. 

[00:00:17] Amy H-L: And I’m Amy Halpern-Laff. Welcome to Ethical Schools. Today we’re speaking with Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, or S.T.O.P., and Sarah Roth, an AmeriCorps fellow at STOP. We’ll discuss surveillance technology in schools. Welcome, Albert and Sarah.

[00:00:37] Albert F C: Thank you so much for having us. 

[00:00:39] Sarah R: Thank you. 

[00:00:41] Jon M: Albert, who is surveilling K through 12 students, and for what purposes? 

[00:00:47] Albert F C: The short version is we don’t really know. We see a broad spectrum of companies that are selling increasingly invasive forms of mass surveillance to educators all across the country. But the problem is that we don’t know what exactly they’re buying, who’s buying it, how it’s being used, or what the impact is. All we know for sure is there’s a lot of junk science being sold and a lot of potential for harm.

[00:01:14] Jon M: What are some examples? 

[00:01:17] Albert F C: Well, one of the things that’s really remarkable about the mass surveillance we see being rolled out in educational spaces is just how evidence-free so much of it tends to be. We see invasive forms of surveillance being rolled out to try to predict who is going to be a threat to their school in the future and who is going to commit harm against themselves. And these would be incredibly important things to know, except the technology just doesn’t really work because we don’t yet have some magical crystal ball that could predict these sorts of things in the future.

But we also see surveillance being used to track students’ movements. We see it being used to track what they say online. We see it being used to track how they use school-provided computers. We see school districts that are transforming the hardware they provide into nothing short of an electronic monitoring device.

And to me as a civil rights lawyer, it’s truly alarming that we’ve allowed the transition to remote learning, the transition to computer-based learning, to become a foothold for schools to monitor so much of their students’ lives. 

[00:02:27] Amy H-L: Who’s collecting this data, and where is it likely to show up and impact students?

[00:02:33] Albert F C: So we can go through some of the specifics. And Sarah, are there any specific companies you’d like to start off by talking about? Because there’s no shortage of examples in this area. 

[00:02:45] Sarah R: Yeah, so we can talk about internet and computer surveillance technology, which Albert mentioned. It really took a foothold during the COVID-19 pandemic, when school switched over to virtual classrooms. What these sorts of companies do, and Gaggle is one example, is look at students’ internet and computer activity on school accounts: the images they pull up on their screens, what they send to each other as students via email, the assignments they have in cloud tools. English assignments, for example, are one thing these companies will particularly scan for references to self-harm. They also pick up keywords, such as words that refer to identity. Queer identities are among the terms that they’ll flag to school administrators and that they’ll even send to government agencies.

[00:03:34] Amy H-L: Sarah, could you give us an example of how this data is turned over to government agencies and for what purpose?

[00:03:41] Sarah R: Yeah. I apologize because I always forget the name of this agency. 

[00:03:45] Albert F C: National Center for Missing and Exploited Children. 

[00:03:48] Sarah R: Yeah. That’s one group, for example, that they’ll turn references over to if a student identifies themselves as queer on these sorts of platforms. That is one example.

Another is that these tools always say that they don’t sell this data to third parties, but they are able to provide it to law enforcement with a warrant request. 

[00:04:10] Albert F C: So just at a high level, why don’t we map out the different ways that information can flow. You have these different companies surveilling students. They can provide information to school officials, who in turn can provide it to law enforcement. You can see them providing it to law enforcement directly, either voluntarily or in response to a subpoena or warrant. You can also see law enforcement partnerships with school districts. One particularly chilling example came from Pasco County, Florida, where we saw a sheriff’s department engaging in a real-time data-sharing partnership with school officials, getting academic records and feeding them into the junk-science predictive policing approach the sheriff used to target students as future risks based off of what their grades had been in the past. And as with many of these forms of surveillance, there’s a lot of potential for abuse. There’s a lot of potential for discrimination. There are a lot of ways it can reinforce existing biases. And there’s a really abject failure to actually safeguard against any of those potential harms. Even more concerning, there’s no evidence that it actually works.

And so I think that is one of the many interesting case studies we see. We often hear about the protections provided under federal laws like FERPA and under state privacy laws, but oftentimes these laws provide quite limited protections against data sharing with law enforcement. And that continues to be of real concern.

[00:05:49] Jon M: The example that you gave, Sarah, of a student self-identifying as queer. I’m just thinking about the students in Florida, where they’ve just created these laws forbidding counselors and teachers in schools from offering any kind of support to LGBTQ students. The ramifications seem really enormous, just from the very fact of this data being collected. And it doesn’t seem as though the school, or certainly the state, would have any kind of legitimate purpose for it, since they’ve just forbidden offering any support to students. Am I missing anything?

[00:06:37] Sarah R: Absolutely. Especially because this is all under surveillance, and it’s shared in compliance with local and state law enforcement. For LGBTQ+ students in states like Florida, I can only imagine the mental health ramifications of tools like this, and also of the laws that are discriminating against trans youth. There will be other ramifications for parents supporting their children and for teachers supporting trans students. It’s nightmarish.

[00:07:07] Albert F C: I think it’s actually worse than that, because some of the bills we’ve seen in Florida and other jurisdictions actually create an affirmative obligation for the school district to notify a parent if, for example, a student uses a different pronoun in the classroom or in the school building. And if that is interpreted to extend to online learning spaces, it could be read by some schools as creating a legal requirement to surveil students, to ensure that when they’re using school-provided devices they’re not self-identifying as trans, because in that circumstance the school would have to notify parents.

So we use the term Orwellian, we use these terms because we don’t really have a way of encapsulating just how bleak it is, but it truly is a level of invasiveness that is hard to imagine.

[00:08:03] Amy H-L: Aside from keywords, how do they determine these risk factors?

[00:08:10] Albert F C: So in the predictive policing and self-harm space, a lot of it’s done through keyword monitoring. So for example, there will be a list of prohibited words. If one of those is used, that could be enough to trigger an alert. But you also see increasingly sophisticated uses of natural language processing that claim to be able to analyze if someone is at risk.

So Khan Academy is, you know, one of the leading online instruction platforms in the country. In a matter of years, it’s gone from being a YouTube channel to being deployed to hundreds of thousands of kids all across the country. And as part of an affiliated private school in Palo Alto, they actually rolled out an AI chatbot recently as a way to give personalized tutoring to students. But as part of that, they said they were training the chatbot to detect self-harm. What gets really concerning with those sorts of deployments is, again, we’re dealing with technologies that have barely been vetted, that are highly experimental. You’re trying to train them to make these life-altering decisions about whom to flag and whom not to flag. And as part of that, you’re incorporating all the biases of AI.

And so with the chatbot, with the natural language processing example, there are a lot of concerns about how the sorts of biases we’ve seen with other forms of AI, particularly biases against neurodivergent students, would lead to a lot of students being falsely accused of posing a threat to themselves or others simply because their minds don’t map onto the training dataset. And if we’ve seen one pattern in the way AI is trained, it’s trained for a very ableist cohort. And when it encounters people who don’t fit that mold, suddenly it treats difference as a danger.
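For a concrete sense of the keyword-monitoring approach described above, here is a minimal sketch in Python. The flag list, sample messages, and alert logic are hypothetical illustrations of the general technique, not the implementation of Gaggle or any other vendor’s product.

import re

# Hypothetical flag list; vendors do not publish their actual lists or models.
FLAGGED_TERMS = ["self-harm", "hurt myself", "queer"]

def scan_message(student_id, text):
    """Return an alert record for every flagged term found in the text."""
    alerts = []
    lowered = text.lower()
    for term in FLAGGED_TERMS:
        # Word matching has no sense of context, which is why an assigned
        # reading about self-harm or a student describing their identity
        # can be flagged the same way as a genuine crisis.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            alerts.append({"student": student_id, "term": term, "text": text})
    return alerts

samples = [
    ("s001", "Essay draft: the article we read discusses self-harm among teens"),
    ("s002", "I came out as queer to my friends today"),
    ("s003", "Can we meet tomorrow to study for the bio test?"),
]
for student_id, text in samples:
    for alert in scan_message(student_id, text):
        print("ALERT:", alert["student"], "matched", repr(alert["term"]))

Even this toy version shows the problem raised above: the first two messages trigger the same alert a genuine crisis would, and everything downstream, whether a call to parents, a report to administrators, or a referral to law enforcement, rests on that context-free match.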

[00:10:08] Jon M: So there’s been a lot of concern, obviously, about the role and the presence of school resource officers. How does this technology fit in with any of that concern? 

[00:10:22] Albert F C: Well, surveillance is a force multiplier for those in power. It’s particularly a force multiplier for the police, and you can easily see the ways that this sort of technology can lead to more adverse encounters with school resource officers, which can lead to more arrests. So it really is a surveillance-to-prison pipeline, where students can be targeted because of this technology and then see this rapid escalation. Many of us have seen the horrifying images of low-level confrontations between students and school resource officers that turned violent. And it’s very easy to see how being flagged by one of these surveillance systems can lead someone into that sort of altercation.

[00:11:14] Jon M: And are they sometimes using surveillance systems as an alternative to school resource officers? 

[00:11:22] Sarah R: I suppose there’s Gaggle in the Minneapolis school district. I wouldn’t say that it was an alternative to school resource officers, because it was deployed during COVID when there wasn’t a physical campus, so school resource officers naturally weren’t there. But the district’s contract with school resource officers, I believe, also ended that year. So in this very particular, very invasive way of looking at student mental health, I wouldn’t say it’s fully replacing the role of the school resource officer so much as enhancing it and targeting this one particular issue. And there’s a lot of other technology that works the same way, like AI models that are meant to spot, for example, the likelihood of a school shooter. Classrooms aren’t monitored quite the way social media is, although school resource officers do use social media monitoring as a resource. And there are computer vision models that watch hallways and classrooms and flag things like doors propped open, which school resource officers then respond to.

AI is also biased, especially against disabled and neurodivergent students, who may need to walk around in an empty hallway that isn’t usually populated, just to get away from overstimulating situations. And then a school resource officer is called to respond to that sort of situation because of this hyper monitoring. 

[00:12:37] Amy H-L: Are parents aware that their students are being watched? 

[00:12:42] Albert F C: Lawyers love to say it depends, but that’s always the case when it comes to these policies, because there are so many thousands of different local school boards that get to control a lot of these decisions over what technologies are used and what policies apply when they’re deployed. And so we see some school districts that are relatively transparent about the terrible technology that they’re putting in the school buildings. But there are cases where this technology is used in secret for years without any disclosure to parents. That was the case in Pasco County, but that’s been the case in countless other school districts with smaller-scale surveillance programs as well.

And here today, I am just worried that this is about to get a lot worse because we see a lot of federal money potentially flowing into this space to accelerate the acquisition of novel technologies to track students. We see a lot more products being sold to school districts, and we see no requirements in the vast majority of jurisdictions to actually reveal what’s being done.

There’s a small number of municipalities that have what we call CCOPS bills, or community control over police surveillance laws, which require schools and other government agencies to publish a list of every surveillance tool they use and to actually get public feedback on it. But that’s an outlier. That’s not the norm. There’s really a clear need for greater protections at the federal, state, and local level.

[00:14:18] Amy H-L: Have there been legal challenges to this? 

[00:14:20] Albert F C: Yes, we’ve definitely seen a lot of lawsuits, and I do think that the school districts that use this technology are acting at their own peril. We’ve seen pushback, particularly in public schools and public universities, against the use of surveillance within the home. This was most prominent in the remote proctoring space during the peak of the pandemic, when we saw a lot of schools deploying invasive surveillance into the home to try to prevent cheating. It was easy to work around those tools. We had so many TikTok videos of people brazenly cheating even when they had this sort of surveillance being used. But some practices, such as scanning your room for the proctor to show that there’s no one else there, have been struck down by the courts. There was one case in Ohio where the court found it was a violation of the Fourth Amendment, which protects against unreasonable searches and seizures, for a public university to require that sort of video surveillance of someone’s home.

But it’s a different situation for private schools, because the Fourth Amendment only applies to governments, so when it’s a private institution, they have much more latitude in what data they’ll collect. And it’s also potentially a trickier case when it comes to K-12 education. The Supreme Court has pushed back on some of the more extreme claims from school officials that they have carte blanche to regulate students’ behavior on social media and whatever they do outside of school. There was a case just a couple of years ago where a student was punished for obscenities on social media, and the court said that was unconstitutional, that it was a suppression of free speech. There are other cases, particularly when it comes to surveilling students on school grounds, where the courts have afforded educators quite a bit of latitude to surveil students. I think we’re going to see a lot of interesting cases in the months ahead.

[00:16:28] Jon M: I’m curious whether Freedom of Information Act requests are a tool people can use or whether, because of issues of confidentiality, it would be difficult to actually be able to get any data by filing them. What’s your experience been? 

[00:16:46] Albert F C: It comes back to that favorite expression: it depends. So FOIL, the freedom of information law, as we call it in New York State, or FOIA, the Freedom of Information Act, as it’s known federally. Every state has one of these, but they vary tremendously in the sort of access that they give people to public records. Some states, like Florida, traditionally have been quite good about complying, while states like New York have been, in legal terms, a dumpster fire when it comes to FOIL compliance. So it really varies, but at least in theory, in almost every jurisdiction, FOIL should be a tool by which to get some of the information about what’s being purchased and what’s being done. There are broad carve-outs for law enforcement practices, but those shouldn’t be a categorical bar to knowing that your school district is wasting thousands of dollars on some new invasive form of technology.

One thing I would recommend for any parents or educators who may be interested in this. So, disclaimer, none of this is legal advice. I am not licensed to practice in any jurisdiction other than New York State. However, in my limited, anecdotal, non-scientific experience, ChatGPT is actually pretty good at generating freedom of information requests. And so if you are curious about what your school is purchasing, what technologies they have, and how that information is being used, this is the sort of thing where ChatGPT might actually be a good thing for humanity, if you start using it to create these sorts of requests.

[00:18:26] Jon M: I have a question about facial technology, facial recognition, and the situation that arose in New York State a couple of years ago with the Lockport School District. Could you speak very briefly about what that was, what the implications of that are, both in New York and potentially elsewhere, and where it stands?

[00:18:48] Albert F C: Yeah, so Lockport I think is really the [inaudible] of everything that goes wrong in the surveillance landscape. It’s a school district upstate that had a plan to install facial recognition to monitor its students. Now, the problem is the plan never seemed to have a consistent rationale for why they were doing it. Sometimes they said, well, this is a way to prevent school shootings. The problem is that, as horrifying as school shootings are, and as much as we would all want some technological fix to prevent them, the vast majority of school shootings involve an individual who has a legal right to be on that property. And so a facial recognition system actually wouldn’t identify the vast majority of school shooters, because they would be students or other people who had a connection to that school. Then we also heard justifications around COVID and contact tracing. We heard justifications about all sorts of things. But what we saw, thankfully, in New York was that common sense prevailed and we passed legislation that outlawed facial recognition in K-12 education, that made it illegal for any public school in New York State to use this technology. And I should say facial recognition is really concerning in K-12 education because the technology is actually more error-prone for younger individuals. When you use facial recognition to scan a child, it’s more likely to be wrong.

But on top of that, we already know that facial recognition, for people of every age, is more likely to be error-prone if they are Black or Latinx or Asian, if they are trans or non-binary, if they’re a woman rather than a man. Basically, anyone other than me, a middle-aged white dude, is suddenly going to be faced with a higher propensity for falsely being flagged by this technology. And that has led to false arrests, that’s led to wrongful imprisonment, that has led to all sorts of harms. And when you imagine using that to track children and potentially to track their [inaudible], it’s just a recipe for disaster, especially given the chilling impact it can have for those with criminal justice involvement or for those who are undocumented.

Sorry, I’ve been talking a lot. Sarah, anything you wanted to add on that point? 

[00:21:16] Sarah R: No, that’s okay. I also just wanted to flag that I was doing some minor research and found out that the Minneapolis school district, shortly after terminating their contract with school resource officers, made a new one with the police. So I would definitely say that Gaggle and these sorts of tools are really just an enhancement to the system that is already in place, as Albert had mentioned.

[00:21:40] Amy H-L: Aside from facial recognition, how is this surveillance biased, perhaps in favor of the middle-aged white dude?

[00:21:51] Albert F C: First of all, even where there’s a technically neutral system that’s being deployed without any algorithmic bias, without any bias in how the technology operates, it’s being operated by human beings, who bring their own biases to those systems. Ruha Benjamin at Princeton has written really wonderfully on the topic and shown how there cannot be such a thing as a neutral technology in the hands of systemically racist institutions. So that’s the first problem. 

But then you look at other aspects of this technology, and there’s bias baked right in. For example, some of the predictive policing and predictive behavioral health surveillance technologies will be trained on a non-representative sample of users. So they won’t be capturing the behavior of everyone. They’ll be using the behaviors of a smaller sample to define what’s normal and what’s healthy, and that alone creates the potential to discriminate.

But then on top of that, oftentimes these systems, particularly keyword-search-based systems, are just running the words they dislike through Google Translate and coming up with a list of what terms are problematic in other languages. So for example, and this isn’t a K-12 example, but it’s an example of how this technology operates: a sheriff’s department on Long Island, New York, as part of the jail it runs, has a program to monitor every phone call. As part of that, they are searching for words that they think are connected with criminality. And they just came up with the list of Spanish words using Google Translate. When native Spanish speakers saw what was being flagged, I heard it described as “you had to laugh, you had to cry, you had to get angry,” because it’s just so bad at identifying what’s truly a threat.
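To make the translation problem concrete, here is a small hypothetical sketch. The English flag words, the literal Spanish “translations,” and the sample phone lines are all invented for illustration (a hardcoded dictionary stands in for the machine translation step), but they show how matching context-free literal translations produces exactly the kind of absurd false positives described here.

# English flag words mapped to literal, context-free Spanish translations.
# Illustrative only; not any agency's actual list.
TRANSLATED_FLAGS = {
    "shoot": "disparar",   # in Spanish also used for firing off a photo
    "piece": "pieza",      # English slang sense assumed; literally a part or component
    "score": "anotar",     # to score a goal, or simply to jot something down
}

def flag_call(transcript):
    """Flag a call if any literally translated keyword appears, ignoring context."""
    lowered = transcript.lower()
    return [en for en, es in TRANSLATED_FLAGS.items() if es in lowered]

calls = [
    "Mi hijo va a anotar un gol este sábado",          # "my son is going to score a goal this Saturday"
    "Falta una pieza del rompecabezas que te mandé",   # "a piece of the puzzle I sent you is missing"
]
for call in calls:
    hits = flag_call(call)
    if hits:
        print("FLAGGED", hits, ":", call)

A native speaker reading the flagged lines sees a soccer game and a jigsaw puzzle, which is the laugh, cry, get angry reaction described above.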

[00:23:51] Jon M: I just wanted to go back for a second to the Lockport situation. The law you mentioned, just to clarify, is strictly applicable to New York State. Is that correct? So that says nothing about what might be happening elsewhere in the country. 

[00:24:04] Albert F C: Yeah, there really aren’t any meaningful federal protections against biometric surveillance. And if anything, federal laws are only pushing in the direction of more surveillance. We’ve seen additional funding after the tragedy in Uvalde, we’ve seen money from the CARES Act being redirected towards school surveillance. And so really, at the federal level, the Biden administration had a real opportunity to bring civil rights into the forefront as part of the Department of Education’s response to the very real threats that schools face. But instead they’ve really embraced far too much of this technology.

[00:24:45] Amy H-L: Is there anything either of you would like to add to this conversation that we haven’t talked about? 

[00:24:53] Albert F C: Sarah, you first. 

[00:24:55] Sarah R: I suppose we went over Khan Academy’s use of AI in classrooms and its flagging of instances where a student might mention self-harm or indicators of self-harm. I thought that was a very important thing to mention, because this sort of technology is newer to classrooms, but there’s a lot of attention and hype around it, especially the sort of technology that incorporates, as Jon mentioned, facial recognition and other sorts of biometric [inaudible] technology. There isn’t widespread adoption of it yet, but I believe there is going to be.

Taken with everything we talked about today, the technology, its biases, and its adoption into classrooms, it can flag students who are perhaps frustrated with a lesson or are just expressing themselves in their normal way, and the technology reads that as being inattentive or lacking motivation. These are all things that have a great impact on equity in classrooms. So that’s important, I think, to keep in mind as well.

[00:25:56] Albert F C: And I would add that we’ve talked a lot about the sort of surveillance that’s looking for acts of violence and really extreme events, but there are a lot of forms of surveillance that are far less extreme but getting baked into technology platforms. We see attention tracking. We see more subtle eye tracking. We see all these ways that we are being monitored constantly on Google and Microsoft platforms, to be measured in how we operate. And this is something that probably is a concern for teachers as well, because we know that increasingly employers are using these tools to measure “productivity” by reducing us to a measure of how frequently we’re engaging with these electronic platforms. It’s almost gotten to the point where it’s like that old image of Homer Simpson getting the bobblehead that goes up and down, hitting a key on the keyboard, just so they think he’s at work. And that’s truly what we’re incentivizing with a lot of these monitoring tools. So I am worried that we’ll see more schools using this as a way to measure, well, how much homework are you doing? How engaged are you? How much are you doing on the group project? How much time are you spending as a teacher preparing your curriculum? There are so many different ways this sort of automated assessment technology could creep into the K-12 educational environment. I think it would just be a really big mistake.
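The bobblehead problem is easy to see in a toy version of an engagement metric. This is a hypothetical sketch of the general approach, not any vendor’s actual scoring formula: it reduces “productivity” to interaction events per hour, which is exactly the kind of measure that periodic synthetic activity satisfies and that focused, offline work fails.

from datetime import datetime, timedelta

def engagement_score(event_times, window_hours):
    """Interaction events (keystrokes, clicks, focus pings) per hour."""
    return len(event_times) / window_hours

def meets_quota(event_times, window_hours, threshold=60.0):
    """Hypothetical cutoff: 60 events per hour counts as 'engaged'."""
    return engagement_score(event_times, window_hours) >= threshold

start = datetime(2023, 1, 1, 9, 0)

# A student reading and thinking, typing only in occasional bursts.
thoughtful = [start + timedelta(minutes=m) for m in (2, 3, 4, 40, 41, 42, 43)]

# A "bobblehead": something tapping a key once a minute so the platform sees activity.
bobblehead = [start + timedelta(minutes=m) for m in range(60)]

print("thoughtful student meets quota:", meets_quota(thoughtful, 1.0))   # False
print("bobblehead meets quota:", meets_quota(bobblehead, 1.0))           # True

The metric rewards constant low-value clicking and penalizes concentration, which is why extending it to grade homework effort or teacher preparation time would be the mistake warned about above.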

[00:27:32] Jon M: Thank you, Albert Fox Cahn and Sarah Roth of Surveillance Technology Oversight Project. 

[00:27:39] Albert F C: Thank you so much for having us. 

[00:27:41] Sarah R: Thank you. 

[00:27:43] Jon M: And thank you, listeners. Check out our new video series, “What would YOU do?”, a collaboration with EdEthics/Justiceinschools.org. Go to our website, ethicalschools.org, and click Video. The goal of the series is not to provide right answers, but to illustrate a variety of ethical viewpoints.

If you found this podcast worthwhile, please share it with a friend or colleague. Subscribe wherever you get your podcasts and give us a rating or review. This helps others to find the show.

Check out our website for more episodes and articles and to subscribe to our monthly emails. We post annotated transcripts of our interviews to make them easy to use in workshops or classes. We can work with consultants to offer customized SEL programs with a focus on ethics for schools and youth programs in the New York City and San Francisco areas.

Contact us at hosts@ethicalschools.org. We’re on Facebook, Instagram, and Twitter @EthicalSchools. Our editor and social media manager is Amanda Denti. Until next week.
