Transcription of the episode “Technology and ethics: AI in the school”

[00:00:15] Jon M: I’m Jon Moscow.

[00:00:16] Amy H-L: And I’m Amy Halpern-Laff. Welcome to Ethical Schools. Today, we speak with Robbie Torney, director of AI Programs at Common Sense Media, a nonprofit that educates teachers, parents, and students on media and technology, and advocates for safety measures to protect young children and students. Welcome, Robbie.

[00:00:38] Robbie T: Jon and Amy, it’s great to be here. Thanks for having me.

[00:00:41] Amy H-L: Would you tell us about Common Sense Media and its work on AI?

[00:00:46] Robbie T: Yeah, Common Sense Media is a nonprofit. We’ve been around for roughly 20 years, and we’ve always been thinking about how we can make tech and media beneficial and supportive for kids, families, and schools.

One of the ways that I think about that is we’ve been through so many waves of media and technology in the past 20 years. There’s been first, you know, the internet and then smartphones and then social media. And AI is one of the most recent waves related to that. And it has some things in common with the previous waves. It has some things that are different from those other waves. In some ways, it’s an intensifier of those other waves, but it’s hard to believe that it’s maybe just three and a half, four years since that ChatGPT moment when generative AI really burst onto the scene, the public consciousness.

But its impact is definitely being felt in K-12 schools. It’s definitely been experienced by teachers and students, and has really changed, in many ways, how kids and teachers have experienced teaching and learning. It’s raised a lot of fundamental questions about what teaching means and what learning means, and it’s been a disruptive force.

We are optimistic that AI can be part of a solution. It’s just a tool, it’s another technology. But as with any technology, it’s going to require education. It’s going to require conversations. It’s going to require ethical use, you know, to the topic of your podcast. I’m excited to dive in and talk about that with you all today.

[00:02:19] Jon M: What are the principles that you look at when you evaluate AI programs?

[00:02:24] Robbie T: Well, when we conduct AI risk assessments, there are eight AI principles that we’ve defined, and this is our list of what we believe that AI should do. I won’t run through them all. We can definitely drop a link for people to look at specifically. But these are things like: Be effective. Does the technology do what it says it does? Be trustworthy. Does it produce results that are believable and engender trust? Keep kids and teens safe. Do the programs meet safety requirements, protect people’s data. Use responsibly, is the name of that principle. Specifically, is the use of data, responsible, ethical, fair, et cetera. And I think taken together, this is our articulation of what we believe AI should do if it is to benefit its teens at schools.

[00:03:16] Amy H-L: And for what purposes are teachers at schools using AI programs?

[00:03:24] Robbie T: This is an area where I think we’ve done a lot of research at Common Sense Media, just tracking the rollout of AI. And I think the headline here is that there’s been a gap between student adoption and teacher adoption of AI. We did a research poll called The Dawn of AI in 2024, and in that poll, roughly 70% of students saId that they were using AI for both personal and school-related reasons. So, school-related reasons are completing homework, working on projects. And personal reasons are entertainment, companionship. But teachers had not adopted AI at that rate, and parents hadn’t either. There were, there were large gaps there, and parents reported not hearing from schools. 80% of parents saId that they had not heard from their school about whether or not their kid could use AI or should use AI.

And I think that’s been a place where there’s been a wide range of experiences. And I think that listeners will probably have had a range of experiences. There’s been early adopters, where AI programs have been used for lots of different things, and then there have been school systems that haven’t done much at all, where teaching and learning have progressed as if AI isn’t really a factor. And I think the one thing that we would underline is that teaching and learning, regardless of whether or not you believe that AI should be used, kids are using AI according to our research, and I think that that is an important thing to acknowledge because we have an obligation as educators; we have a dual role to both inform and teach kids about how to use the technology responsibly, and also to learn ourselves about how it can support our practice.

I think people focus a lot on the time saving components of it, but there’s also maybe some opportunities to deepen our own impact. So, Amy, to answer your question specifically. People are using AI for lesson planning-related activities. They’re using AI for assessment-related activities. They’re using AI for coaching-related activities. They’re using AI for observation-related activities. And I think, as I’m just laying out some of these purposes, what you’re hearing is across every stage of the teaching and learning cycle, there are some AI applications, and that’s because AI is just a tool, right. It’s just an application of technology. And you can imagine that in the same way we use computers across all phases of the teaching and learning cycle, that there might be some applications of AI machine learning across all of those, those different phases as well.

Now, according to our risk assessments, there are certainly some areas where there may be better uses or more risky uses or areas that we have to pay more attention to, and I think we can certainly get into that. But I guess I want educators more broadly just to understand that this is a technology, this is a tool, and anywhere where you could think about using technology, you could imagine applying this technology to solve the unique needs to address the goals, the vision, the mission of your school, your district. This is about applying a general-purpose technology to the specific needs of your context.

[00:06:42] Jon M: So, we’re going to want to get into a lot of the things that you just mentioned, but before we do, I have a basic question, which is especially ‘because you mentioned data safety, especially as we know that young people are using AI in and out of school. What is happening with the data, and who gets the data, and how is it being used, and is it anonymized? I don’t know if I’m even pronouncing that right, but, uh…

[00:07:12] Robbie T: Anonymized.

[00:07:13] Jon M: Anonymized. So what happens when you’re talking to a chatbot? What happens with that?

[00:07:21] Robbie T: It’s a great question, Jon, and the answer is it depends. I think in previous generations of technology, if you’re using a product for free, you kind of are the product, right. And if you look at the terms of service for that product, in many cases, the company has the right to retain your data to use it to train future models, to keep it indefinitely. And I think that that’s a particular concern when you look at some of these AI companions. For example, the terms of service [inaudible] Character AI allow that company, which is a social AI companion that simulates relationships, right. So the bot will pretend to be a friend or a girlfriend, or a mentor or a real person. When you chat with that bot, the company can keep your chats forever and use that data for any purpose. And in the data training race where data is gold and is used to traIn AI models, that’s risky, especially when you think about teens sharing some of their most intimate thoughts and secrets with some of these models.

We released another nationally representative survey recently about how teens use AI companions, and about one in 14 said that they shared personal secrets with AI companions. And if that’s really the case, knowing that some companies may be retaIning and training on your data is an important thing to be aware of.

Now, there are other cases where you may be paying for a service where that’s not the case. And I think in school settings where there are particular laws or particular requirements related to student data privacy, you’re going to be more likely to have better practices in place related to data retention, data storage, data privacy, et cetera. And I think that’s one of the reasons why we encourage school districts to adopt products, if they’re going to be used, that get teachers and students off of these free versions because they offer better protections. And it’s one of those reasons why having everyone just using their own products, so where a teacher in one class is using one product and a teacher in another class is using another free product, and students are using their own free products, that creates conditions where there’s a lot of risk for data to be lost, hacked, stolen, misused, et cetera, down the line.

[00:09:50] Jon M: So, what are AI teacher assistants, and what are their major advantages, and what are the major risks that you see?

[00:10:00] Robbie T: AI teacher assistants are a special class of AI technology, and you can think of them as a maybe friendlier and more approachable, designed for purpose, class of AI products. A lot of people are familiar with your general AI chat chatbot, your ChatGBT, your Gemini, your Claude. And when you go to those chatbots, you get a chat interface, which is basically a prompt box at the bottom of a screen, and that’s really open-ended. These systems can be used for a lot of different things. They can be used for lesson planning, they can be used for assessment design. They can be used for all the school-specific tasks, but they’re not specifically designed for that. And that can be, you know, a little bit daunting for an educator to think about how you might use that tool in the tech space. This is called the Cold Start problem. You give a user a tool and they’re not quite sure how they get started. So these AI teacher assistants solve the Cold Start problem because they have a user interface and a design that helps you approach common tasks that are designed for your industry or your job a little bit more easily.

So you’ll see lesson planning tools, you’ll see assessment tools, you’ll see brainstorming tools, you’ll see coaching tools, et cetera. And in our risk assessment of AI teacher assistants, we did a deep dive look at four products that are very popular on the market right now. The Gemini Teacher Assistant, which is integrated in Google Classroom, Conmigo, which is the teacher assistant, also the Conmigo Tutor, which is designed for student use, Curapod, and Magic School AI. And these all have a variety of different tools and functions that are designed to help teachers with day-to-day tasks, save teachers time, and increase teacher impact.

[00:12:01] Amy H-L: How do these tools fit in with teachers’ goal to be culturally sensitive?

[00:12:08] Robbie T: That is a great question, and I think Amy, in response to that, I would say we, in this risk assessment, really clearly try to lay out what AI tools know and what AI tools don’t know. So, AI tools in general have some training that comes from the broad training that language models have, that they’ve gotten from data that exists broadly on the internet. This is what allows them to generate text or generate images, and it’s a really useful feature.

But it’s what’s at the heart of a functionality that allows them to make worksheets or to organize information or to respond to the tasks that you put into them. And that’s powerful. It can be quite time-saving to have an AI teacher assistant make you three different versions of a worksheet or to take a text and to break it into a different format for an activity. What they don’t have is pedagogical knowledge. So they don’t have any knowledge about the best way to teach an activity. They don’t have knowledge about the students in your classroom or how they learn best.

And to your specific question, they actually don’t have knowledge about how to teach in a culturally sensitive way. They don’t know what that means. They have been trained on a lot of text about culturally sensitive teaching. They may be able to generate something that sounds plausible about culturally sensitive teaching, but they’re not actually going to be able to do anything that resembles culturally sensitive teaching. And I think the key message here is that teacher expertise is actually quite irreplaceable in the usage of these tools. If you approach one of these tools and you have an idea for how to make a culturally a sensitive lesson, and you need help generating materials related to that, this tool could save you a bunch of time. If you have taught a culturally sensitive lesson in a specific way in a previous year and you want to modify it for a different grade level or you have an idea about a particular practice that is a really good fit for your school community, and you need to expand it so that it’s a school-wide activity, or you’re changing the format from a class-based activity to a school assembly-based format, those are examples of things that these assistants could be really good at.

They on their own are not going to be able to generate the specific strategies that are going to make an activity culturally sensitive. They might be able to help you brainstorm or thought partner on some ideas that could be a good fit to make an activity more culturally sensitive. So you could use it for inquiry, but your expertise as a teacher is going to be absolutely essential in doing that work. So they’re not going to save you time and energy on that front. And I think for many teachers, that’s going to be reassuring. That is the most human part of the work. That is the most interesting part of the work, and that is the part of the work that isn’t going to be automated anytime soon.

[00:15:06] Amy H-L: Are AI programs useful as assistive or adaptive technology, say for students with dyslexia?

[00:15:16] Robbie T: Yes, I think, is the short answer, and I think the longer answer is it’s going to also depend on teacher expertise, being able to tell the system how to adapt material, in what way to adapt the material, to be able to evaluate the outputs, to make sure that it’s been adapted appropriately and correctly to give the system feedback if it hasn’t done it the right way. Human oversight matters so much, and if you can tell the system exactly how you want output modified for a particular purpose or reason, systems are really good at following through instructions They can save you so much time, effort, and energy, but you have to have the expertise to be able to check and make sure that they’re able to do so correctly.

[00:16:01] Jon M: So, it’s clear from what you’re saying that these things can be helpful if you have the expertise and if you’re thinking of them as something that’s going to build on your expertise, but that if they’re presented or used without the expertise or as a replacement, that they’re not only not going to be helpful, but that they can be harmful.

What have you seen in terms of how schools and districts and for that matter, schools of education, are adapting? Is there a general recognition, would you say and a willingness and an ability to put in the time to make it clear that these are enhancements and not replacements?

[00:16:51] Robbie T: I think it really varies by school, honestly. And I think that schools are all over the place catching up. There are places that have been very thoughtful and proactive and have been on the cutting edge. And then there there’s still places that are catching up and I think that mirrors what we’ve seen in previous waves of technology and reform and other, you know, changes in waves in education.

This is a microcosm of change that we’ve seen in education, I think, in this country. And it doesn’t feel in some ways that different than other types of changes that we’ve seen in education. So I think in terms of places that are doing this right, there has been a appropriate situation or a placement of the technology as an assistant with also appropriate training around how to use the tool.

It’s not a hey, we’re going to be be doing training on AI. It’s we are working on assessment right now and the professional development about assessment has some components about how AI can be used to support assessment, right? It’s a tool that is part of the broader work of the school. While, of course, knowing that there’s some skill-based work that some educators are going to need. I think that it’s just one of these places also where educators are just needing to have the right calibration in terms of what the technology is good at. I think the marketing hype around some of these tools is really incredible. Teachers are being told that these tools are literally magic and that they can save them so much time and that they are so good at so many different things. And if that is the message that you’re hearing about these tools, that is kind of different than the message of , Hey, your expertise is super important, and you’re still going to have to make all the decisions and you’re going to have to tell the model exactly what to do and you’re going to have to supervise it really carefully and you’re going to have to check its outputs in a bunch of different ways, and you as a school community are going to have to decide which use cases are good and which cases are off limits and why. That just feels a little bit of a different message.

And I think that’s also a place where school communities do need to come together and appropriately calibrate to be like yes, this is why we’re using AI. We’re not using it because we think it’s a magical technology. We’re using it because it enhances our ability to serve students. It enhances our mission, it enhances our ability to get students college and career ready. We think it’s important to get students using these tools in some way, shape, or form, because it helps us with our goals of be sure that they’re ready for the workforce down the line. Whatever the unique vision of the school is, it has to be rooted at that local context and that pragmatism I think that many educators are so good at embodying, but we have to get past the marketing hype that exists around some of these products. So that is a place where I think there is some healthy skepticism. Many educators have lived through the previous generation of the EdTech Revolution and have been sold or heard a lot of big promises about what EdTech was going to do for schools and haven’t necessarily seen that come to life.

So I think the message that I would tell educators is, let’s be open-minded. There’s a huge opportunity for these tools to be helpful. Let’s also be critical where necessary, and question some of the promises. And then let’s be realistic about what the tools are good at and what they’re not good at, so that we can match the uses of the technology to what the technology is actually good at.

[00:20:09] Amy H-L: What are some of the special risks of using AI teacher assistants for IEPs and and behavior plans?

[00:20:18] Robbie T: Yeah, so in some of these teacher assistants there are what we call “high stakes use cases” for IEPs, behavior plans, progress reports, things like this. And these are high stakes, because these documents, in some cases, are legal documents and other cases, what is put on these documents could impact student progress over multiple years. And if we zoom out and think about what is AI? What does AI, what does it not know? As I was running through that list, one of the things that AI doesn’t specifically know, is that these models don’t have a good understanding of what’s going on in classrooms. They don’t have good observational data. They’re not watching students or student interactions day in and day out. They don’t have a good understanding in the case of behavioral data, of what happened before a behavior happened, what happened afterwards, how the student responded. They don’t understand the relationships that exist in a classroom.

And that’s just to say that there’s a mismatch between what you can put in that prompt box and the very rich data sources that a behavioral interventionist or a skilled clinician would be monitoring if they were doing behavioral intervention work. And whenever you see these mismatches… I think IEPs are another great example. When you’re completing an IEP, you’re going to be doing present levels. There’s going to be a psychoeducational assessment, there’s going to be an academic assessment, there’s going to be observational data, there’s going to be an interview with parents, there’s going to be an interview with the student, there’s going to be an interview with staff. There’s going to be lots of data collection. There’s going to be work sample analysis. There’s a lot of work that needs to go into assembling an IEP. And then when you look at some of the tools, it’s , here’s the box, put in the student’s disability. Name some things that they struggle with and it generates an IEP document that looks very plausible, but isn’t actually about anything specific to that student.

You start to see this mismatch of, okay, the system is generating something based off of very limited data, but it’s not actually rooted in data that’s specific to that student. The first risk then is that you are getting very plausible-sounding information that isn’t actually rooted in the types of information that AI would need to be able to actually generate those data types.

The second risk that I would name, that we uncovered in our testing, is a type of invisible bias that we call “invisible influence.” And this is specific to the behavior intervention tools, although it’s not unique to the use of behavior intervention tools. And it’s rooted in a characteristic of language models that is specific to their ability to infer information about people or things based on very small or limited information about them. So in this case, a single piece of information, which is the name of a student. So, in this particular test scenario, what we did is we provided the behavior intervention tool with a structured input about a student who was experiencing a behavioral challenge. And the only thing we changed was the student name. So we ran a prompt, a large number of times using a coded name, a white female-coded name, a white male-coded name, a Black female-coded name, and a Black male-coded name. And then we compared the results in the aggregate, agaIn, looking at the outputs when aggregated.

And I think one of the reasons why I think this is an important thing to dig into, is that, and why we call this “invisible influence,” is that I was an educator for a long time before I came to work at Common Sense Media and I taught and worked in East Oakland. So I’ve done a lot of reflecting and work on bias and cultural competency and teaching across difference. And obviously, I still have more work to do in that area, as do we all. But the only reason I contextualize that, as I’m sharing this, is that I think I’m usually pretty good at identifying potential bias as it’s written down in documents I’ve seen over the years. Teachers write down things about students or parents write down things about students where I’m like okay, that teacher believes a certain thing about that student that may not be true. Or that parent believes a certain thing about that other student that may not be true or is rooted in a biased belief about that student.

And when I was looking at the outputs of say, the Black name, I was not necessarily seeing those overt lags where I was, oh, that output is biased. I was these strategies that are being provided are positively framed. They are specific, they are helpful. I could see this being really helpful if I was developing a, behavior intervention plan for this particular student, this could be a good base. But when we aggregated the outputs, we actually started to see some very striking differences between the outputs. So the white female-coded name outputs actually got the most supportive outputs. They were the longest, they had the most student agency. The students got the most say in how the plan went.

The Black male-coded outputs were the shortest, they were the most directive. And there were other output differences that we highlight specifically in the report. But you weren’t able to see these differences unless you were doing this very comparative analysis in these large data sets of outputs.

And that’s not how teachers are using these tools necessarily. They’re generating large numbers of identical outputs. They’re not analyzing them and saying, oh, there are these differences in how the models are performing when they’re looking at different student names based on this analysis that we did.

And, you know, sharing it back with the companies, Google actually removed the behavior intervention planning tool from Gemini and Google Classroom, which is a responsible choice and one that we appreciate. And I saId this phenomenon that we demonstrated in this particular case, this is a well-understood phenomenon or of these language models.

There was another study that another group of researchers did where folks who were doing salary negotiations with ChatGPT, were getting different outputs based on whether they were putting in a male-coded name or a female-coded name. And I think this is just one of these fundamental things that educators need to understand, which is that these models can behave in unpredictable, almost invisible ways in some cases.

And that does mean that there are some types of use cases grading or behavior plans or other cases that where, if the models have access to certain types of student data, you have to be very careful to make sure that they’re not behaving in ways that are biased that you, as the educator, might not be able to see.

[00:26:53] Jon M: So, given that the answers or the guidance that they give, whether you’re talking about in terms of behavior or in terms of curriculum, may be subtle, are there, and you’ve talked obviously about how teachers’ expertise is key, but are there specific things that you can recommend that teachers can integrate into their practice to be checking in the moment on whether they’re getting the best answers? Because as you saId, you may not notice it when it’s one student or even one class, but where it would be clear if you were looking at it in a much larger scale, which of course teachers don’t have access to.

[00:27:39] Robbie T: Yeah. So I think the answer here is that when you start off with your teacher assistant and you say, okay, there’s 50 tools here, the first job of the community of educators is to say, which ones are the low risk tools or the lower risk tools, and which ones are the higher risk tools? And we’ve done a little bit of that work in our risk assessment to help identify which ones are how you think about that, which ones are the ones that are likely to have an impact on student progression if they were to get things wrong, which ones are higher stakes in the sense that if they produced a biased output, it would have a bigger impact on students? And you can kind of just X those out and be those ones are maybe unacceptably risky because if they got the outputs wrong and we were not able to see the bias or errors in those outputs, it would be damaging to students, right. So, if you then go through that sorting exercise, and you say, okay, we’ve eliminated as a school community and saId we’re not going to use these functions of the teacher assistant because we think they’re too risky.

We have these other ones and now we’re going to go ahead and identify our practices for providing oversight. And I think we give a couple of very specific examples of how oversight does need to be, or we think oversight needs to be provided in the risk assessment. One is just to don’t put yourself in the situation where you are pushing slides live to students without having reviewed the content ahead of time. So think about how you build in the time and space to just double check and verify content before you put it in front of kids. And some people say, okay, well AI isn’t really then saving teachers any time ‘because they have to double check and verify the content. And I guess to that, I always say, well, teachers have always had to double check and verify content even with textbooks or even with worksheets or even with internet content, right. That has always been a core part of the job. It’s just an especially important part of the job when you’re using AI-generated content. it just, it hasn’t really changed.

Another important part is if AI is giving students feedback of any kind, if that’s something that your school community is important with, making sure you check the feedback before it goes to students is really important. We have a couple of examples of how AI feedback can not be totally right or contextually aware, and it’s just really important to have that human in the loop.

And then I think the third is there are particular areas where AI teacher assistants or AI systems in general tend to have a little bit more trouble. So, complex historical events are one example. We did a lot of testing with the California missions as one example, where the Indigenous California perspective is underrepresented in the training set. So the colonial Spanish perspective is overrepresented in the training set and most language models and most teacher assistants tend to produce a overly rosy, overly positive spin on what life was under Spanish rule. So it’s important to recognize that if you are a history teacher, if you are teaching content that is complicated politically or is touching on, on issues that are a little bit more controversial, right? if you’re a civics teacher, and maybe this feels a little bit of a no-braIner. Those issues have always needed to be dealt with a little bit more carefully in schools. They’ve always required a little bit more oversight, and they’re going to continue to require a little bit more oversight and a little bit more human judgment. That hasn’t gone away, but that just means you’re not going to rely on the language model to do the framing for you, your expertise still matters, you need to provide the framing.

And I think to that end, one of the last suggestions I would give in this vein is the, we provide a suggestion in the risk assessment, which we call grounding the model. Instead of relying on the training data of the internet writ large, if you provide your own curriculum, your own notes, your own specific examples, whatever uploaded information that you have that is specific to your own treatment of the lesson of your subject area, that will be incredibly helpful because it’s going to make the treatment of that material specific to how you want to teach it. And it’s going to go a long way towards mitigating any biases that might exist in the overall training set of the model writ large.

[00:32:10] Amy H-L: I’m specifically interested in how these tools influence the teaching of slavery, you know, especially in light of the current administration.

[00:32:23] Robbie T: Yeah, unsurprisingly, it’s just a really fraught and really complicated topic. Some models will just straight up refuse to generate information about slavery, right, which is a form of erasure in and of itself. Others, you know, I think similar to what I shared about the California missions, will produce a perspective that is not a representative perspective. And I think when I zoom out on this, though, I don’t actually fault the models for this because one of the most fundamental questions in history, if you think about this as a history teacher, is whose story gets told. How do we know what history is? How do historians understand what actually happened? These are questions about historical truth. These are questions about primary sources. These are questions about multiple perspectives and those are questions that we as humans struggled with and we as history teachers struggled with before language models, right. these were questions that you might have gotten a different perspective on the treatment of slavery in some classrooms in this country. You definitely got a different perspective on the treatment of slavery in classrooms in some regions of this country before there were chatbots, right. This is another situation where these systems are mirrors for issues that exist in our society, or tensions or flashpoints that exist in society. And the models themselves aren’t going to solve those issues. They’re going to reflect those issues. And this is a place where we need to, again, ground the models. So if we have a perspective, if we are using the Facing History materials to teach about the enslavement of African-American people, or we’re using the Teaching Mockingbird curriculum to teach To Kill a Mockingbird or something that, if we have a particular approach that we think is the right approach for our school, the grounding of the models with those approaches is going to produce a result and outputs that are aligned with the values, the approach, the theory, the mindsets that we think matter, right. And you’re not going to get those same outputs if you don’t provide those inputs to the model, you’re going to get something that that is different, that’s the average.

And I think all of that is to say technology has never been values-agnostic. Technology has always been value-laden. And these systems continue to be laden with values and will be transmitting and reflecting and amplifying the values of the educators and the school systems that are using them.

[00:34:55] Jon M: So, I have a technical question, a clarifying question. You referred to tools and functions. What do you mean?

[00:35:05] Robbie T: That’s a great question. I guess what I’m saying there is tools are the models themselves, and within some of the teacher assistants there are a variety of specific tools. There’s buttons that you can click on that do things like lesson planning or assessment generation or behavior planning, I think you could define those as tools.

And then there are behaviors that I think I’m imprecisely calling functions, that the models may use to accomplish those tasks. And these are things, in this particular case, that I was referring to as uploading content to ground the model. But there are other things that we haven’t talked about, the use of, sometimes models will do web search to find information that doesn’t exist in their training data. And that’s another place where you get interesting results. But Jon, and so I don’t know specifically what I saId, I was just riffing, so please forgive me if I saId something that was not technically precise.

[00:35:58] Jon M: No, no. I was trying to envision when you were saying that as you’re looking at whether some things that the AI program might do are more risky than others, and that okay, we’re not going to do that because it’s just too risky. So, and you gave the example, for example, that you might not use a for behavioral counseling, whereas you might use it for lesson planning. Yeah. And I was just trying to get an idea of what kind of differentiation a school or a teacher can make because I don’t have any image at all of what it’s actually looking when…

[00:36:35] Robbie T: Oh, yeah. Can I show you?

[00:36:37] Jon M: Yeah, well you can show us, it won’t show to our listeners, so it’s better to describe it.

[00:36:42] Robbie T: Yeah. Let me, I will talk one through so that we can take a quick look and so that we can imagine the task that may be in front of educators. I’m just getting logged into one right here. So this is just one example of a teacher assistant. There’s a lot of them. So I’m not going to, you know, name it by name, but when I look at this one, there are a lot of tools in here. Some of them are: there’s an image generator in here, there’s a lesson planning tool, there’s a writing feedback tool, there’s a text translator, there’s an email generator tool, there’s a teacher jokes tool, there’s a report card comment tool, there is an exemplar and non-exemplar tool, there’s the behavior intervention suggestions tool, which is one that we’ve been talking about over the course of the day today, there is the 504 plan generator, there is the advanced learning plan tool. So there’s a bunch of tools that are in here, and as I look at this list, I think one of the things that is tricky is that there are some tools that are pretty low risk and pretty straIghtforward that are sitting right next to some tools that are pretty high risk and probably shouldn’t be used. And if you are just an individual teacher and you are just using this tool by yourself, it’s kind of hard to make that determination.

And I think that’s part of why we make the recommendation and the AI risk assessment report that for these tools to be used, they have to be implemented as part of an entire school community coming around and saying, hey, we are all going to be using an AI teacher assistant and we’re all going to be using it in these particular ways. And as part of that inventorying exercise I was describing earlier, you could go through and have a group of teachers think about the exercise of identifying the behavior intervention suggestions tool. That could be really risky because it doesn’t have the…, like if I actually click on it, right, it’s just asking for a grade level and areas to support. Does it have the context that is really needed to be able to provide the support that would be needed for a student?

I’m just going to, Jon, I’m just going to generate one for you right now. I’m just going to say, Jon has a hard time listening, right. And I could upload a file, but I don’t have to. And I’m just going to press generate right now, and it’s going to tell me that I could use some active listening strategies, some visual aids and prompts and some frequent check-ins. And you know, not bad suggestions, maybe that’s a good place to start. But these are not really customized to why Jon has having a hard time listening. They are not specific to his specific needs. Is Jon having a hard time listening in particular classes? Is he having a hard time listening because the materials are not engaging? Is a peer distracting Jon? there’s all kinds of reasons why Jon might be having a hard time listening and these tools aren’t necessarily producing the most tailored and helpful results. And then when you layer on some of the biased outputs that we’ve found in tools like this based on student names, that’s a place where we would say, okay, let’s not use this one in our community. But let’s take a look at the text translator tool. That’s actually pretty low risk and a really easy way for us to make texts more accessible for kids.

So in a situation where you have a lot of functions or a lot of features available to you, starting off with an inventorying exercise can be a great way to de-risk the use of an AI teacher assistant.

[00:40:07] Jon M: Thanks. That’s very helpful. So part of it is the tool and part of it is the prompts, right, that lead to some of these incomplete or biased results?

[00:40:21] Robbie T: Yes, exactly. So it’s okay, there’s the tool and how it’s set up, but then there’s also how am I as the teacher asking the model to give me the information that is needed, right. So saying I need a lesson plan on ecosystems, that’s not very specific. It doesn’t say what grade level it’s for, it doesn’t say what’s come before or what’s come after, it doesn’t have all the knowledge that we talked about in terms of the standard, the lesson, what my students need, the pedagogical expertise that I have. It is actually going to probably feel like work to construct the prompt or to upload the materials. It’s going to feel iterative. You’re going to have to provide information over multiple passes and to provide feedback to these models. It’s not going to feel zero effort. It isn’t going to feel magic, but it is going to save you time potentially, right. in some of the testing that we did do as part of this, you know, again, former educator here. I definitely spent a lot of time at 10:00 PM making multiple versions of a worksheet or making a slide deck for a particular group of students for part of a project. I was able to, and our team was able to differentiate lesson materials very effectively or to create highly scaffolded questions that were aligned with standards or create materials that were appropriately differentiated for English learners. But it all came down to the directions that we gave the teacher assistants. And it was all, again, rooted in how specific and how much of our own teacher expertise we were able to activate within the model, because the models don’t have that teacher expertise. They have superpowers related to being able to synthesize information and generate contents and organize information. And it’s the pairing of those two sets of expertise that can be really helpful. And I think it all goes back to how we have named this class of technology. We call them teacher assistants because that’s what we believe that they are, they are assistants for teachers. You could think of them as if you had a helper that could help you make some materials and get things organized and draft some worksheets and help you set up some texts, right. There’s a bunch of tasks that you could assign an assistant. If you had a teacher assistant in your classroom, you wouldn’t delegate a bunch of instructional decisions to a teacher assistant. You would have the judgment to understand which instructional decisions you would keep to yourself. And I think that is the orientation you have to have to using AI in the classroom, which is, there are some decisions that I keep for myself and there’s others that I am going to delegate to AI, and then I’m going to check the assistant’s work. Because it is an assistant, it is not an expert.

[00:43:01] Amy H-L: Your report had some very specific steps that teachers should take when using these teacher assistants. Maybe you could go through those.

[00:43:09] Robbie T: Sure. I think I’ve spoken a little bit about this already, but I, the four specific steps that we suggest for teachers are, first, just start with your curriculum and material. So this is that grounding that we’ve talked about. So this is upload your existing lesson plans, your curriculum guides, your instructional materials, your context before you have the AI teacher assistant generate any contents. AI teacher assistants are going to do better when they enhance what you already have, and you’re always going to have to make sure that their outputs are aligned with your curriculum sequence, your context, and your teaching goals.

The second is just really being aware that you have to provide pedagogical context. The models don’t know about students’ learning needs and challenges. They don’t know what strategies and approaches work best for specific contents for specific kids in your class or for specific profiles of learners. You’re going to have to give examples of successful lessons or activities when you’re asking for similar contents.

We do say you have to review everything before learning. I know again, for some folks, they say, well, that’s why would I use AI if I have to check all the outputs? But again, I do believe that teachers have always had to check content before putting it in front of kids. But you do need to double check outputs to make sure that they’re accurate. You’re looking for overt bias, appropriateness, and that you need to edit and refine things to make sure that they match what you’re looking for, and that they’re connected to what came before and what came after. And then to the editing and choosing the appropriate use cases.

There are just some high stakes use cases that are probably not a good fit for these tools. So IEPs, 504 plans, behavior planning, progress reporting. If you have anything that is requiring a lot of information about students, that is requiring multiple sources of information over long periods of time, that should just set off your teacher spidey sensor alert that, hey, this is probably not a good use of AI.

And that if they are producing outputs, it’s going to be unlimited information and it’s not going to actually be a good fit for what these models are good at.

[00:45:16] Amy H-L: Thanks. Often when new technologies are introduced, districts buy them and then just hand them to teachers to use, and teachers complain, number one, that they haven’t been consulted, and number two, that they aren’t given the training to know how to use them. Are you finding this is the case with the AI models?

[00:45:40] Robbie T: I mean, again, I think this is a microcosm of other patterns of technology adoption. So, I think there’s been different experiences that we’ve seen in different school districts. There’s some where that certainly has been the experience and there’s others where rollout has been done thoughtfully.

I do think one of the things that is true right now is that we’re in a moment where funding is a little bit tighter than it has been maybe during the pandemic or pre-pandemic, and also where schools are a little bit savvier coming out of the last ed tech revolution. So I think there has been less rush to just adopt maybe than there has been in previous ed tech adoptions, because schools are a little bit more wary, they’re a little bit more cautious about just adopting for the sake of adopting.

That saId, you know, the hype is real and there have been large systems, including the California State University system that have adopted AI without a very clear understanding of how it was going to be implemented or a clear understanding of how professors and students were going to be trained on its usage.

So all of that’s just to say even very large systems of susceptible to the pressure to adopt and to implement and to, and to roll out. So I guess my only commentary on that would just be , this is a tool. It’s a tool that can be very powerful, right. It’s a tool that I use every day and that I think a lot of other professionals use every day.

Teachers are professionals and teachers. As professionals can use and benefit from technology, and I think that the conversation really needs to be about how do technological tools, including AI benefits and augment and support the mission, vision, values, and outcomes associated with any educational institution, and if tools like AI can help further learning outcomes, social emotional outcomes, community-based outcomes, or other priorities that schools, districts, teachers, parents, students themselves have, then this is probably an investment that schools should be looking at. And if the answer is that this is not a priority for a school or a community, then that’s okay too, right.

But this is not, like any other thing, something to adopt just for the sake of adopting. Because if anything is abundantly clear in education, adopting something and trying to implement in a top-down manner is just a recipe for failed implementation. And I think that’s why in the Common Sense Media AI toolkit for school districts that we put together, the basis of that approach is in community engagement and starting to talk to all the stakeholders in the community about if this is a fit, if this is something that addresses needs, that exist in the community, and that stakeholder engagement is really a critical part of, of any a successful implementation effort.

[00:48:36] Jon M: What role can school boards and school administrators or state departments of education play to help prevent AI tools from becoming “invisible influencers,” since obviously it’s difficult for an individual teacher to do that?

[00:48:52] Robbie T: The most critical role that administrators and school boards and state boards can play is to provide clarity to school staff and school administrators about AI and how it can be used. You know, one of the things that we’ve been tracking and that others have been tracking is the extent to which schools have actually implemented AI policies. And here I’m talking about acceptable use policies that clearly tell teachers and kids whether they’re allowed to use AI. I think RAND has the most recent data on this. I think, as of the end of last year, 88% of schools had not adopted an acceptable use policy. And that’s a recipe for confusion that matches the number that I shared earlier in this conversation where 80% of parents shared that in the Common Sense Media’s Dawn of AI Nationally Representative Survey shared that they had not heard from their school about whether their child was allowed to use AI. So much of the conversation has been about cheating and AI misuse. There was cheating before there was generative AI. And yes, there has been cheating with generative AI also. And that is definitely a problem. And some of what we can do to mitigate that is to tell kids very clearly when they can use AI, when they can’t use AI, and why. And in some of the schools that we have seen that have given kids the most clarity around this, there are school-wide systems that explain to kids when they can’t use AI when they can’t and why.

So for example, there’s a school network in Arizona where they have rolled out a simple tool, but it’s, it’s called the AI Stoplight. So, they have, uh every single assignment is coded with a green light, a yellow light, or a red light. Uh, green light assignments are like AI use is not only allowed but encouraged. It’s an assignment that is designed to be used with AI. And that’s because this school district believes that the use of AI is really important for their workforce readiness goals, and they’ve designed a lot of assignments so that kids can complete them with AI. Yellow light assignments have certain restrictions on the use of AI. So you can use AI for certain tasks, but you may need to do all the writing yourself because this is for an English class, and the skill that you’re practicing is writing, and you’re not to do any of the writing with AI. And students understand that, and they’re going to use the track changes function in Google Docs to make sure that you’re not doing the writing with AI. Then the red coded assignments are, you complete them all yourself and the students understand why, right. And if you use AI, that’s a violation of the academic integrity policy. And students understand what’s going to happen if that happens. And this is just a case where the clarity, teachers report, has gone a long way towards diminishing confusion with kids about when they can and can’t use AI. You know, I just saw a piece this morning that Victor Lee, a professor at Stanford, had published in Vox about cheating with Gen AI. And I think it was kind of similar, which is there was cheating before Gen AI. There’s cheating now. And lack of clarity is still a major barrier. So school policies sounds so boring, sounds so , why do we need them? But we need school policies to help tell teachers when they can use AI, what they can use AI for, how they need to provide oversight, the types of training that they need to receive. And then for students, the same: when they are allowed to use it, when they aren’t the type of training that they need to receive. Those are all pieces that are going to be critical for the use of this technology. People can’t use tech if they aren’t set up for success, and that requires clarity and that requires training. And these entities that exist at the more administrative level do have an important function to play in providing that clarity and providing that support.

[00:52:43] Amy H-L: Does Common Sense Media support any local or state legislation to protect students from AI risks?

[00:52:54] Robbie T: Yes, we do. We have a number of bills that we are working to get across the finish line in California right now, including AB 1064, the Lead for AI Act that would regulate high risk uses of AI. The lead author on that bill is Assembly Member Rebecca Bauer-Kahan. And this would prevent the use of AI for, among other things, companionship, which we define as including emotional support, and mental health advice, which has been in the news a lot lately with, you know, of course, the tragic case of, of Adam Raine, whose family is suing Open AI as he committed suicide in a way that appears to have been supported by ChatGPT.

Look, there are certain uses of the technology that are just not a fit for kids right now that are unacceptably risky for kids. And we’ve talked a lot about teacher use of AI at this particular point in time, but parents should not have to have the responsibility of protecting their kids from some of these unsafe use cases. There’s other types of things that are in AB 1064, including certain types of biometric data collection, et cetera. But at minimum, we do believe that there are certain types of AI use for kids that are just too risky. We’ve also passed similar legislation in New York and have other types of things that we are considering pursuing moving forward.

But the legislative work is an important part of how we approach this as well, because we do believe that systemic change matters. We’ve also passed legislation that mandates AI literacy, but that work is slow. Adding AI literacy to curriculum frameworks takes time by the time that that is fully implemented.

In some cases, you know, AI adoption is going to be a lot further down the path, and that’s why we’ve also done things like release our updated digital literacy and wellbeing lessons, which include new AI literacy lessons and are available for free on our website right now for educators to access, to engage their students in thinking about things, misinformation and disinformation and responsible technology use, and using technology in ways that are aligned with their values. These are all things that are part of what we think of as a whole community approach, a whole child approach, to thinking about responsible technology use, because it’s not just one thing, and legislation is definitely part of it.

[00:55:18] Jon M: So, you mentioned the California bills, which sound really important, and we’d be glad to link to those, but how does that work, if you’re talking about a state level regulation with international corporations? I mean, how does the state say, for example, with the companionship bill, how does that work?

[00:55:37] Robbie T: That’s a great question. In an ideal world, I think we would pass some federal legislation, and we have historically worked on very close to passing some federal legislation, but the simple truth of the matter is that the federal government hasn’t passed meaningful privacy or kids’ legislation related to technology in a very, very long time. And in the absence of the federal government stepping up and doing that work, we have taken action to pass legislation in large markets, California and New York, or the European markets. And, you know, the tech companies do not like fragmented regulations. It’s very difficult for them to comply with 50 different laws or to think about how they’re going to split apart compliance. You know, it’s definitely a little bit of a challenge there, and we appreciate that challenge.

But one of the ways that that plays out, Jon, is that sometimes the law that gets passed in a large market, California, becomes the law of the land. You can think about California emission standards, for example, are more stringent than emission standards in other states. It has been easier for U.S. lawmakers to comply with those in the US market than to have a different set of emission standards just for California. And I think we’ve seen historically a similar approach in technology. Where when we can get tech regulation passed in California or New York, it’s been easier for the industry to comply with those regulations across the US market than to just comply in California and do something else differently in the other states.

That saId, you know, we may be at a moment where there’s increasing recognition that there are harms that our kids are experiencing with regard to this new wave of technology. There are potential federal hearings that are being held related to risks of AI, chatbots, and and kids. And, you know, that’s promising. But we’ve got to see if there’s action that’s going to be taken on this front. We’ve all lived through the last generation of social media harms and social media impacts on kids. And I think that as a collective, maybe we’re at a place where we recognize that we don’t want another generation of kids to be guinea pigs for another wave of technology. And I’m cautiously optimistic that things may play out differently this time, but time will tell and we’ll see how things play out.

Again, I am cautiously optimistic as well, though, that this technology will have benefits for kids and for teachers. There are, I think, some good opportunities for learning and for learning access and for equity with AI. But for that to happen, some of these tools need to be designed a little bit differently and deployed a little bit differently. Yeah, we’re optimistic and we’re also pushing for guardrails to be able to say to kids and families in K-12 schools, yes, these tools are ready for, for prime time in schools.

[00:58:31] Amy H-L: What else should we be discussing at the intersection of AI and ethics?

[00:58:38] Robbie T: I think it’s just important to recognize, I mentioned earlier, that AI is an intensifier. In some ways, AI is just showing up in more kid domains, it’s showing up in social media apps. It’s increasingly showing up in video games. It’s starting to show up in toys, for little kids. So, I think it’s just really important for us to recognize that this is a general-purpose technology that has lots of different applications. And this is not just about K-12. We’ve been talking about K-12 specifically here today. But this is really about a technology that is going to be showing up in all of the different places that we’ve been talking about tech and kids for a long time. This is going to be a place where we start to see AI show up with streaming and short form video content. And indeed, kids are already starting to consume AI-generated content on streaming platforms. You name a place where technology and kids have been in the same sentence in the past 15, 20 years. I think we’re starting to see AI show up in those places. So I think that a lot of the ethical conversations there about what’s the nature of childhood, what’s the nature of play, what’s the nature of human relationships, is it okay for those relationships to be supplanted or replaced, in some cases, by relationships with non-human entities? Do we think that matters or not? Is it okay for young kids to have connections, interactions with chatbots or not?

And again, in parallel to the conversation we’ve been having about schools, I think it’s just really critical for parents to come together and have those conversations together so that it’s not just me as a parent thinking about, well, what does this mean for my 4-year-old and my 1-year-old? But for us to collectively start to have some of those conversations together and say, well, you know, as a town, as a society, as a collective, what do we want for our children? I think that those are the types of conversations that are really at stake right now.

And I guess I’ll just loop back to the piece that I saId earlier. This is a conversation about values. This is a conversation about the values that are related in technology. And this is a place for us to set those values and to say, for example, we want AI to be augmenting and supporting human work and not replacing human work, for example. That’s a very common values conversation that people are having as we think about impacts of AI on the job market.

I think there’s other types of values that or values, conversations, that, that can come out when we think about AI, kids and teens. But we’re going to have to be proactive about it because otherwise deployment is just going to happen and it’s going to have a bunch of unintended consequences. And we’re starting to see just the very beginnings of that as kids are starting to get hurt, unfortunately, as the tech is rolling out.

[01:01:33] Amy H-L: Thank you so much, Robbie Torney of Common Sense Media.

[01:01:37] Robbie T: Thank you for having me.

[01:01:38] Jon M: And thank you, listeners. If you found this podcast worthwhile, please share it with your friends and colleagues. Subscribe wherever you get your podcasts and give us a rating or review. This helps others to find the show. Check out our website, ethicalschools.org, for more episodes and videos and to subscribe to our email newsletters. We post annotated transcripts of our interviews to make them easy to use in workshops or classes. Contact us at hosts@ethicalschools.org. We’re on Bluesky, Facebook, Instagram, Threads, and LinkedIn. Our editor and social media manager is Amanda Denti. Until next week.

Click here to listen to this episode.