Study finds new technology has unintended risks for students, schools

Courtesy / Center for Democracy & Technology

As new technology and artificial intelligence make their way into the classroom, schools around the country are scrambling to control how students use programs like ChatGPT. A new report from the Center for Democracy & Technology dispels some of the myths around how students are using this technology and raises alarms about its unintended risks. Elizabeth Laird is the center's director of equity in civic technology and co-authored the report. She spoke with Ozarks at Large's Daniel Caruth; this is an edited transcript of their conversation.

Elizabeth Laird: What we found in our research is that schools are using an increasingly broad array of technologies to help students learn and keep them safe. But unfortunately, we're also seeing that some of these tools are interfering with students' education, and even more are failing to keep them safe. Almost every teacher we surveyed said that their school uses software that's supposed to restrict access to harmful online content. However, three-quarters of students say that this filtering and blocking has gone too far and actually prevents them from completing school assignments, and teachers agree that this is an issue. And finally, there's the technology that needs no introduction: generative AI. Half of teachers say they know of a student who's gotten in trouble for using it, while only a quarter have received any guidance from their school on how to respond if they suspect a student of cheating.

Daniel Caruth: And it's interesting, because a lot of this technology, we would assume, could help bridge some of those educational gaps or divides and maybe make learning more accessible or easier. Does this study find any advantages to this technology? And when does it start to become a problem?

Laird: Our goal in this research is to identify where the risks are, so that all of the benefits you've just started to talk about can be realized. I'll offer a couple of places where schools would benefit from focusing more on the privacy and equity concerns that we saw come through across all the different types of technology being used in schools. The first is that schools are not providing adequate resources or transparency to any of the stakeholder groups we surveyed.

The example I'll offer is that, alongside conversations about banning books in school libraries, a third of teachers say that their school is more likely to block content associated with LGBTQ+ students and students of color, while only 27% of parents have ever been asked for input on what content is restricted. What this amounts to is a digital version of the content banning happening in schools, but with even less visibility for parents and the public. The other risk that school leaders would benefit from focusing on is that the technology they're using to keep students safe is actually endangering them. We found this year, unfortunately, that LGBTQ+ students are being outed due to student activity monitoring, and that's up six percentage points from last year. One in five students now know someone who has been outed because of this technology.

Caruth: Wow. Can you talk a little bit about that technology, the filtering, the blocking, the content monitoring? How are schools using these technologies, and where is the guidance for that coming from?

Laird: The origin of one of these technologies is rooted in the early 2000s, when a law was passed that required schools to prevent access to content that might be harmful to a minor. Historically, that's meant adult explicit online content, and that is what schools are doing. However, they have gone far beyond that. As I mentioned, they are using filtering in ways that at times actually prevent students from just doing their schoolwork, and at other times they apply more subjective judgments about what students should have access to, with parents having no say in what that looks like. As for monitoring technology, its origin was really during COVID, when learning moved from the classroom to the home and schools wanted to maintain greater visibility into what students were doing online.

Now that we have this research, my hope is that school leaders can look at how these technologies are being used and try to better understand the negative effects they're having, especially the negative effects on students by race, by sex and by disability. If those groups of students sound familiar, it's because civil rights protections have existed for decades to prevent discrimination against them. So I'd like to see education leaders really take stock of how they're using data and technology to understand whether these groups are being harmed more than their peers, even if it's not on purpose, and apply the existing staff, policies and practices they already use to protect students in person to ensure students are just as safe online.

Caruth: And as far as remedies go, what are some actions that educators, students or policymakers can take to mitigate these risks?

Laird: I would like to talk about the role of parents, because they are oftentimes missing from conversations about the way technology is used in schools. I want them to know that they're not alone: we actually saw concerns among parents increase by 12 percentage points over last year. This could be in part due to more of them being notified that their school has had a data breach. One in five parents say that they have received notification that their child's information was breached by the school, which is likely contributing to increased worries. The other thing I hope parents hear is that they should not bear this burden alone. It really falls to schools to do more to protect the children entrusted to them.

So I would love for parents to engage with their schools and ask questions like: How prepared are you for a data breach? What type of content is filtered or blocked, and why? Who receives monitoring alerts outside of school hours? I mentioned earlier that law enforcement can be involved in this. And finally, as we enter the new school year, and we know that generative AI is not going away, what type of guidance has the school provided to teachers about how students can use generative AI, and how should they respond if they think students are using it in ways that aren't permitted?

Caruth: Yeah, and I want to talk a little bit about AI specifically. I'm in Arkansas, which is more of a rural state, and we may have less access to some technologies than a lot of other places. But is AI in classrooms today? How is it being used, and are there rules or restrictions around it?

Laird: AI is not just coming to schools; it's here. Specifically with generative AI, 50% of students say that they've used it in the past year, and students with disabilities are even more likely to use it: 72% say that they've used this technology in the past year. At the same time, to your point about what type of guidance schools are providing, a third of teachers reported that their school has banned generative AI, and fewer than half say they have received any substantive training on this technology. So Arkansas is certainly not alone in grappling with the role of AI in the classroom moving forward.

One specific risk that would benefit from more attention is the way that generative AI is creating mistrust between teachers and students. Sixty-two percent of teachers agreed with the statement that generative AI has made them more distrustful of whether their students' work is actually theirs. But when we asked students, only 19% said that they've submitted a paper that was generated by this technology. And not only that, half of teachers agree that students who use school-provided devices are more likely to get in trouble or face negative consequences for using generative AI. So there appears to be quite a big gap between what teachers think is happening and what students are actually doing, and schools have not provided guidance to help teachers navigate this and create a learning environment in which technology use is supported, fair and equitable, and doesn't lead to this kind of adversarial relationship.

Daniel Caruth is KUAF's Morning Edition host and a reporter for Ozarks at Large.