cheating – The 74, America's Education News Source

Another AI Side Effect: Erosion of Student-Teacher Trust
Mon, 22 Sep 2025

William Liang was sitting in chemistry class one day last spring, listening to a teacher deliver a lecture on “responsible AI use,” when he suddenly realized what his teachers are up against.

The talk was about a big, take-home essay, and Liang, then a sophomore at a Bay Area high school, recalled that it covered the basics: the rubric for grading as well as suggestions for how to use generative AI to keep students honest: They should use it as a “thinking partner” and brainstorming tool.

As he listened, Liang glanced around the classroom and saw that several classmates, laptops open, had already leaped ahead several steps, generating entire drafts of their essays.

Liang said his generation doesn’t engage in moral hand-wringing about AI. “For us, it’s simply a tool that enables us not to have to think for ourselves.”


But with AI’s awesome power comes a side effect that many would rather not consider: It’s killing the trust between teachers and students. 

When students can cheaply and easily outsource their work, he said, why value a teacher’s feedback? And when teachers, relying on sometimes unreliable AI-detection software, believe their students are taking such major shortcuts, the relationship erodes further.

It’s an issue that researchers are just beginning to study, with results that suggest an imminent shakeup in student-teacher relationships: AI, they say, is forcing teachers to rethink their assumptions about students, assessments and, more broadly, learning itself.

If you ask Liang, now a junior and an experienced opinion writer — he has penned pieces for The Hill, The San Diego Union-Tribune and the conservative Daily Wire — AI has already made school more transactional, stripping many students of their desire to learn in favor of simply completing assignments.

“The incentive system for students is to just get points,” he said in an interview. 

While much of the attention of the past few years has focused on how teachers can detect AI-generated work and put a stop to it, a few researchers are beginning to look at how AI affects student-teacher relationships.

Researcher Jiahui Luo of the Education University of Hong Kong found that college students in many cases resent the lack of “two-way transparency” around AI. While they’re required to declare their AI use and even submit chat records in a few cases, Luo wrote, the same level of transparency “is often not observed from the teachers.” That produces a “low-trust environment,” where students feel unsafe to freely explore AI.

In 2024, after being asked by colleagues at Drexel University to help resolve an AI cheating case, researcher Tim Gorichanaz, who teaches at the university, analyzed college students’ online discussions spanning December 2022 to June 2023, shortly after OpenAI unleashed ChatGPT onto the world. He found that many students were beginning to feel the technology was testing the trust they felt from instructors, in many cases eroding it — even if they didn’t rely on AI.

While many students said instructors trusted them and would offer them the benefit of the doubt in suspected cases of AI cheating, others were surprised when they were accused nonetheless. That damaged the trust relationship.

For many, it meant they’d have to work on future assignments “defensively,” Gorichanaz wrote, anticipating cheating accusations. One student even suggested, “Screen recording is a good idea, since the teacher probably won’t have as much trust from now on.” Another complained that their instructor now implicitly trusted AI plagiarism detectors “more than she trusts us.”


In an interview, Gorichanaz said instructors’ trust in AI detectors is a big problem. “That’s the tool that we’re being told is effective, and yet it’s creating this situation of mutual distrust and suspicion, and it makes nobody like each other. It’s like, ‘This is not a good environment.’”

For Gorichanaz, the biggest problem is that AI detectors simply aren’t that reliable — for one thing, they are more likely to flag the papers of English language learners as being written by AI, he said. In one Stanford University study, they “consistently” misclassified non-native English writing samples as AI-generated, while accurately identifying the provenance of writing samples by native English speakers.

“We know that there are these kinds of biases in the AI detectors,” Gorichanaz said. That potentially puts “a seed of doubt” in the instructor’s mind, when they should simply be using other ways to guide students’ writing. “So I think it’s worse than just not using them at all.” 

‘It is an enormous wedge in the relationship’

Liz Shulman, an English teacher at Evanston Township High School near Chicago, recently had an experience similar to Liang’s: One of her students covertly relied on AI to help write an essay on Romeo and Juliet, but forgot to delete part of the prompt he’d used. Next to the essay’s title were the words, “Make it sound like an average ninth-grader.”

Asked about it, the student simply shrugged, Shulman recalled in a piece she co-authored with Liang.

In an interview, Shulman said that just three weeks into the new school year, in late August, she had already had to sit down with another student who used AI for an assignment. “I pretty much have to assume that students are going to use it,” she said. “It is an enormous wedge in the relationship, which is so important to build, especially this time of the year.”


Her take: School has transformed since the extended COVID lockdowns of 2020, with students recalibrating their expectations. It’s less relational, she said, and “much more transactional.”

During lockdowns, she said, Google “infiltrated every classroom in America — it was how we pushed out documents to students.” Five years later, if students miss a class because of illness, their “instinct” now is simply to check Google Classroom, the widely used management tool, “rather than coming to me and saying, ‘Hey, I was sick. What did we do?’”

That’s a bitter pill for an English teacher who aspires to shift students’ worldviews and beliefs — and who relies heavily on in-class discussions.

“That’s not something you can push out on a Google doc,” Shulman said. “That takes place in the classroom.”

In a sense, she said, AI is contracting where learning can reliably take place: If students can simply turn off their thinking at home and rely on AI tools to complete assignments, that leaves the classroom as the sole place where learning occurs. 

“Because of AI, are we only going to ‘do school’ while we’re in school?” she asked. 

‘We forget all the stuff we learned before’

Accounts of teachers resigned to students cheating with AI are “concerning” and stand in contrast to what a solid body of research says about the importance of teacher agency, said Brooke Stafford-Brizard, senior vice president for Innovation and Impact at the Carnegie Foundation.

Teachers, she said, “are not just in a classroom delivering instruction — they’re part of a community. Really wonderful school and system leaders recognize that, and they involve them. They’re engaged in decision making. They have that agency.”

One of the main principles of Carnegie’s blueprint for improving secondary education is a “culture of trust,” suggesting that schools nurture supportive learning and “positive relationships” for students and educators.

“Education is a deeply social process,” Stafford-Brizard said. “Teaching and learning are social, and schools are social, and so everyone contributing to those can rely on that science of relational trust, the science of relationships. We can pull from that as intentionally as we pull from the science of reading.”


Gorichanaz, the Drexel scholar, said that for all of its newness, generative AI presents educators with what’s really an old challenge: How to understand and prevent cheating. 

“We have this tendency to think AI changed the entire world, and everything’s different and revolutionized and so on,” he said. “But it’s just another step. We forget all the stuff we learned before.”

Specifically, research going back decades identifies four key reasons why students cheat: They don’t understand the relevance of an assignment to their lives, they’re under time pressure, they’re intimidated by high stakes, or they don’t feel equipped to succeed.

Even in the age of AI, said Gorichanaz, teachers can lessen the allure of taking shortcuts by solving for these conditions — figuring out, for instance, how to intrinsically motivate students to study by helping them connect with the material for its own sake. They can also help students see how an assignment will help them succeed in a future career. And they can design courses that prioritize deeper learning and competence. 

To alleviate testing pressure, teachers can make assignments more low-stakes and break them up into smaller pieces. They can also give students more opportunities in the classroom to practice the skills and review the knowledge being tested.

And teachers should talk openly about academic honesty and the ethics of cheating.

“I’ve found in my own teaching that if you approach your assignments in that way, then you don’t always have to be the police,” he said. Students are “more incentivized, just by the system, to not cheat.”

With writing, teachers can ask students to submit smaller “checkpoint” assignments, such as outlines and handwritten notes and drafts that classmates can review and comment on. They can also rely more on oral exams and handwritten blue book assignments. 

Shulman, the Chicago-area English teacher, said she and her colleagues are not only moving back to blue books, but to doing “a lot more on paper than we ever used to.” They’re asking students to close their laptops in class and assigning less work to be completed outside of class. 

As for Liang, the high school junior, he said his new English teacher expects all assignments to come in handwritten. But he also noted that a few teachers have fallen under the spell of ChatGPT themselves, using it for class presentations. As one teacher last spring clicked through a slide show, he said, “It was glaringly obvious, because all kids are AI experts, and they can just instantly sniff it out.”

He added, “There was a palpable feeling of distrust in the room.”

Students Increasingly Rely on Chatbots, but at What Cost?
Sat, 02 Aug 2025

Students don’t have the same incentives to talk to their professors — or even their classmates — anymore. Chatbots like ChatGPT, Gemini and Claude have given them a new path to self-sufficiency. Instead of asking a professor for help on a paper topic, students can go to a chatbot. Instead of forming a study group, students can ask AI for help. These chatbots give them quick responses, on their own timeline.




For students juggling school, work and family responsibilities, that ease can seem like a lifesaver. And maybe turning to a chatbot for homework help here and there isn’t such a big deal in isolation. But every time a student decides to ask a question of a chatbot instead of a professor or peer or tutor, that’s one fewer opportunity to build or strengthen a relationship, and the human connections students make on campus are among the most important benefits of college.

Julia Freeland-Fisher studies how technology can help or hinder student success at the Clayton Christensen Institute. She said the consequences of turning to chatbots for help can compound.

“Over time, that means students have fewer and fewer people in their corner who can help them in other moments of struggle, who can help them in ways a bot might not be capable of,” she said.

As colleges further embed ChatGPT and other chatbots into campus life, Freeland-Fisher warns lost relationships may become a devastating unintended consequence.

Asking for help

Christian Alba said he has never turned in an AI-written assignment. Alba, 20, attends College of the Canyons, a large community college north of Los Angeles, where he is studying business and history. And while he hasn’t asked ChatGPT to write any papers for him, he has turned to the technology when a blank page and a blinking cursor seemed overwhelming. He has asked for an outline. He has asked for ideas to get him started on an introduction. He has asked for advice about what to prioritize first.

“It’s kind of hard to just start something fresh off your mind,” Alba said. “I won’t lie. It’s a helpful tool.” Alba has wondered, though, whether turning to ChatGPT with these sorts of questions represents an overreliance on AI. But Alba, like many others in higher education, worries primarily about AI use as it relates to academic integrity, not social capital. And that’s a problem.

Jean Rhodes, a psychology professor at the University of Massachusetts Boston, has spent decades studying the way college students seek help on campus and how the relationships formed during those interactions end up benefitting the students long-term. Rhodes doesn’t begrudge students integrating chatbots into their workflows, as many of their professors have, but she worries that students will get inferior answers to even simple-sounding questions, like, “how do I change my major?”

A chatbot might point a student to the registrar’s office, Rhodes said, but had a student asked the question of an advisor, that person may have asked important follow-up questions — why the student wants the change, for example, which could lead to a deeper conversation about a student’s goals and roadblocks.

“We understand the broader context of students’ lives,” Rhodes said. “They’re smart but they’re not wise, these tools.”

Rhodes and one of her former doctoral students, Sarah Schwartz, created a program called Connected Scholars to help students understand why it’s valuable to talk to professors and have mentors. The program helped them hone their networking skills and understand what people get out of their networks over the course of their lives — namely, social capital.

Connected Scholars is offered as a semester-long course at UMass Boston, and a forthcoming paper examines outcomes over the last decade, finding that students who take the course are three times more likely to graduate. Over time, Rhodes and her colleagues discovered that the key to the program’s success is getting students past an aversion to asking others for help.

Students will make a plethora of excuses to avoid asking for help, Rhodes said, ticking off a list of them: “‘I don’t want to stand out,’ ‘I don’t want people to realize I don’t fit in here,’ ‘My culture values independence,’ ‘I shouldn’t reach out,’ ‘I’ll get anxious,’ ‘This person won’t respond.’ If you can get past that and get them to recognize the value of reaching out, it’s pretty amazing what happens.”

Connections are key

Seeking human help doesn’t only leave students with the resolution to a single problem; it gives them a connection to another person. And that person, down the line, could become a friend, a mentor or a business partner — a “strong tie,” as social scientists describe those central to a person’s network. They could also become a “weak tie,” someone a student may not see often but who could, importantly, still offer crucial help one day.

Daniel Chambliss, a retired sociologist from Hamilton College, emphasized the value of relationships in his 2014 book, “How College Works,” co-authored with Christopher Takacs. Over the course of their research, the pair found that the key to a successful college experience boiled down to relationships, specifically two or three close friends and one or two trusted adults. Hamilton College goes out of its way to make sure students can form those relationships, structuring work-study to get students into campus offices and around faculty and staff, making room for students of varying athletic abilities on sports teams, and more.

Chambliss worries that AI-driven chatbots make it too easy to avoid interactions that can lead to important relationships. “We’re suffering epidemic levels of loneliness in America,” he said. “It’s a really major problem, historically speaking. It’s very unusual, and it’s profoundly bad for people.”

As students increasingly turn to artificial intelligence for help and even casual conversation, Chambliss predicted it will make people even more isolated: “It’s one more place where they won’t have a personal relationship.”

In fact, a study by researchers at the MIT Media Lab and OpenAI found that the most frequent users of ChatGPT — power users — were more likely to be lonely and isolated from human interaction.

“What scares me about that is that Big Tech would like all of us to be power users,” said Freeland-Fisher. “That’s in the fabric of the business model of a technology company.”

Yesenia Pacheco is preparing to re-enroll in Long Beach City College for her final semester after more than a year off. Last time she was on campus, ChatGPT existed, but it wasn’t widely used. Now she knows she’s returning to a college where ChatGPT is deeply embedded in students’ as well as faculty and staff’s lives, but Pacheco expects she’ll go back to her old habits — going to her professors’ office hours and sticking around after class to ask them questions. She sees the value.

She understands why others might not. Today’s high schoolers, she has noticed, are not used to talking to adults or building mentor-style relationships. At 24, she knows why they matter.

“A chatbot,” she said, “isn’t going to give you a letter of recommendation.”

This article was republished under license.

From English to Automotive Class, Teachers Assign Projects to Combat AI Cheating
Fri, 13 Jun 2025

Kids aren’t as sneaky as they think they are. 

They do try, as Holly Distefano has seen in her middle school English language arts classes. When she poses a question to her seventh graders over her school’s learning platform and watches the live responses roll in, there are times when too many are suspiciously similar. That’s when she knows students are using an artificial intelligence tool to write an answer. 

“I really think that they have become so accustomed to it, they lack confidence in their own writing,” Distefano, who teaches in Texas, says. “In addition to just so much pressure on them to be successful, to get good grades, really a lot is expected of them.”




Distefano is sympathetic — but still expects better from her students. 

“I’ve shown them examples of what AI is — it’s not real,” she says. “It’s like margarine to me.”

Educators have been trying to curb the use of AI-assisted cheating since ChatGPT exploded onto the scene. 

It’s a formidable challenge. For instance, there’s a whole genre of content reserved for tech influencers who rack up thousands of views and likes teaching students how to most effectively use AI programs to generate their essays, including step-by-step instructions on bypassing AI detectors. And the search term for software that purports to “humanize” AI-generated content spiked in the fall, only to fall sharply before hitting the peak of its popularity around the end of April.

While the overall proportion of students who say they’ve cheated has held steady, students also say AI tools have made cheating easier.

But there may be a solution on the horizon, one that will help ensure students have to put more effort into their schoolwork than entering a prompt into a large language model.

Teachers are transitioning away from question-and-answer assignments or straightforward essays — in favor of projects. 

It’s not especially high-tech or even particularly ingenious. Yet proponents say it’s a strategy that pushes students to focus on problem-solving while instructing them on how to use AI ethically. 

Becoming ‘AI-Proof’

During this past school year, Distefano says her students’ use of AI to cheat on their assignments has reached new heights. She’s spent more time coming up with ways to stop or slow their ability to plug questions and assignments into an AI generator, including by giving out hard copy work. 

It used to mainly be a problem with take-home assignments, but Distefano has increasingly seen students use AI during class. Kids have long been astute at getting around whatever firewalls schools put on computers, and their desire to circumvent AI blockers is no different. 

Between schoolwork, sports, clubs and everything else middle schoolers are juggling, Distefano can see why they’re tempted by the allure of a shortcut. But she worries about what her students are missing out on when they avoid the struggle that comes with learning to write. 

“To get a student to write is challenging, but the more we do it, the better we get,” she says. “But if we’re bypassing that step, we’re never going to get that confidence. The downfall is they’re not getting that experience, not getting that feeling of, ‘This is something I did.’”

Distefano is not alone in trying to beat back the onslaught of AI cheating. Blue books, which college students use to complete exams by hand, have had a resurgence as professors try to eliminate the risk of AI intervention, reports The Wall Street Journal.

Richard Savage, the superintendent of California Online Public Schools, says AI cheating is not a major issue among his district’s students. But Savage says it’s a simple matter for teachers to identify when students do turn to AI to complete their homework. If a student does well in class but fails their thrice-yearly “diagnostic exams,” that’s a clear sign of cheating. It would also be tough for students to fake their way through live, biweekly progress meetings with their teachers, he adds. 

Savage says educators in his district will spend the summer working on making their lesson plans “AI-proof.” 

“AI is always changing, so we’re always going to have to modify what we do,” he says. “We’re all learning this together. The key for me is not to be AI-averse, not to think of AI as the enemy, but think of it as a tool.”

‘Trick Them Into Learning’

Doing that requires teachers to work a little differently. 

Leslie Eaves, program director for project-based learning at the Southern Regional Education Board, has been devising solutions for educators like Distefano and Savage. 

Eaves authored the board’s report, released earlier this year. Rather than exile AI, the report recommends that teachers use AI to enhance classroom activities that challenge students to think more deeply and critically about the problems they’re presented with.

It also outlines what students need to become what Eaves calls “ethical and effective users” of artificial intelligence. 

“The way that happens is through creating more cognitively demanding assignments, constantly thinking in our own practice, ‘In what way am I encouraging students to think?’” she says. “We do have to be more creative in our practice, to try and do some new things to incorporate more student discourse, collaborative hands-on assignments, peer review and editing, as a way to trick them into learning because they have to read someone else’s work.”

In an English class lesson on “The Odyssey,” Eaves offers an example: Students could focus on reading and discussion, use pen and paper to sketch out the plot structure, and use AI to create an outline for an essay based on their work, before moving on to peer-editing their papers.

Eaves says that the teachers she’s working with to take a project-based approach to their lesson plans aren’t panicking about AI but rather seem excited about the possibilities. 

And it’s not only English teachers who are looking to shift their instruction so that AI is less a tool for cheating and more a tool that helps students solve problems. She recounts that an automotive teacher realized he had to change his teaching strategy because when his students adopted AI, they “stopped thinking.” 

“So he had to reshuffle his plan so kids were re-designing an engine for use in racing, [figuring out] how to upscale an engine in a race car,” Eaves says. “AI gave you a starting point — now what can we do with it?”

When it comes to getting through to students on AI ethics, Savage says the messaging should be a combination of digital citizenship and the practical ways that using AI to cheat will stunt students’ opportunities. Students with an eye on college, for example, give up the opportunity to demonstrate their skills and hurt their competitiveness for college admissions and scholarships when they turn over their homework to AI. 

Making the shift to more project-based classrooms will be a heavy lift for educators, he says, but districts will have to change, because generative AI is here to stay. 

“The important thing is we don’t have the answers. I’m not going to pretend I do,” Savage says. “I know what we can do, when we can get there, and then it’ll probably change. The answer is having an open mind and being willing to think about the issue and change and adapt.”

AI Skeptic Creates Chatbot to Help Teachers Design Courses
Thu, 27 Mar 2025

While many educators spent the past two years fretting that artificial intelligence is killing student writing, upending person-to-person tutoring and generally wreaking havoc on scholastic inquiry, the well-known thinker and ed tech expert Michael Feldstein has been quietly exploring something completely different.

For more than a year, he has led an effort with a group of about 70 educators online to build what’s essentially a chatbot with one job: to guide teachers, step-by-step, through the process of designing their own courses — a privilege previously reserved for just a few instructors at elite institutions.




The experimental software, dubbed the AI Learning Design Assistant, or ALDA, has yet to hit the market. But when it does, Feldstein said, it will be free. With any luck, it could mark a new era, offering teachers at all levels an easy way to design their own homegrown coursework, assessments and even curricula at a fraction of the cost demanded by commercial publishers. Feldstein has worked primarily with college instructors, and his work is widely applicable in higher ed. But it’s got potential in K-12 education as well.


He’s pushing to democratize instructional design, a little-known academic field in which professional designers build courses by working backwards: They interview teachers to help them drill down to what’s important, then create courses based on the findings. 

When it’s ready, he said, ALDA could well shake up the teaching profession, making off-the-shelf AI behave like a personal instructional designer for virtually every teacher who wants one. 

And for the record, Feldstein said, there’s an acute shortage of such designers, so this particular iteration of AI likely won’t put anyone out of a job. 

‘What is this good for?’

Feldstein is well-known in the ed tech community, having worked over the years at Oracle, Cengage Learning and elsewhere. A one-time assistant director of the State University of New York’s Learning Network, he has more recently garnered a wide audience with his blog — required reading for college instructors and ed tech experts.

Over the past few years, Feldstein has likened tools such as ChatGPT and AI image generators like Midjourney to “toys in both good and bad ways.” They invite people to play and give players the ability to explore what’s basically cutting-edge AI. “It’s fun. And, like all good games, you learn by playing,” he wrote.

But he cautions that when they’re asked to do something specific, they “tend to do weird things” such as return strange results and, on occasion, hallucinate.

As a longtime observer of ed tech, Feldstein’s approach has always been to step back and ask: What is this good for? 

“AI is interesting because there are many possible answers, and those answers change on a monthly basis as the capabilities change,” he said. “That makes the question harder to answer. Nevertheless, we need to answer it.”

ALDA’s focus, he said, has always been on helping participants think more deeply about what teachers do: The AI probes students to find out what they know, then fills in the gaps. 

“As an educator, if I ask you a question, I’m trying to understand if you know something,” he said. “So my question is directly related to a learning objective.” 

By training, teachers naturally modify their questions to help figure out if students have misconceptions. They circle around the topic, offering clues, hints and feedback to help students home in on what they know. But they don’t simply give away the answer.

Over the course of the year, he and colleagues have broken down the various aspects of their work, including what they’d outsource if they had an assistant or “junior learning designer” at their side. 

Excerpts of a conversation between an AI chatbot and a teacher who is in the process of designing a course. The open-source tool, AI Learning Design Assistant, or ALDA, is being co-developed by educator and blogger Michael Feldstein along with a small group of college instructors. (Courtesy of Michael Feldstein)

The AI starts simply, asking “Who are your students? What is your course about? What are the learning goals? What’s your teaching style?” It moves on from there: “What are the learning objectives for this lesson? How do you know when students have achieved those objectives? What are some common misconceptions they have?”

Eventually teachers can begin designing the course and its assessments with a clear focus on goals and, in the end, their own creativity. 

Feldstein holds decidedly modest goals for the project.

“The idea that we’re going to somehow invent a better AI model than these companies that are spending billions of dollars is crazy,” Feldstein said. But making course design accessible “is very doable and very useful.” 

He has intentionally brought together a diverse group of instructors that includes both heavy AI users and skeptics. Among them: Paul Wilson, a longtime professor of religion and philosophy at Shaw University in Raleigh, N.C. Though Wilson has taught there for 32 years, he has dabbled in AI over the past few years as it reared its head in classes, assignments and faculty meetings. 

He came away from Feldstein’s sessions over the past few months with the outlines of not one but two courses: a world religion survey, which he designed last summer, and a course in pastoral care. The latter, he said, is a “specialty class” for ministers-in-training who are getting their first taste of interacting with congregation members.

“They’re doing field work,” he said, “and this particular class is going to cover the functions they would have if they were serving in pastoral ministry.” 

The course will cover everything from the business of running a congregation to the teaching and counseling duties of a pastor and the “prophetic” role — preaching and teaching the Bible, shepherding the congregation and offering spiritual guidance. 

Wilson said the AI let him tweak the course design in response to test users’ suggestions. “By the end, my experience was that I was working with something valuable,” he said. He is offering the class this semester. 

“I got a very good course design, with all the parameters that I was looking for,” he said. 

Geneva Dampare, director of strategy and operations at the United Negro College Fund, said the organization invited six instructors from five HBCUs to Feldstein’s workshop. Dampare, who has an instructional design background, joined as well. 

Many faculty at these institutions, she said, don’t see AI as the menace that other instructors do. For them, it’s a kind of equalizer at colleges that don’t typically offer a perk like instructional designers. 

But by the end of the process last November, Dampare said, many instructors “could comfortably speak about AI, speak about how they are integrating the ALDA tool into the curriculum development that they’re doing for next semester or future semesters.”

Could Massachusetts AI Cheating Case Push Schools to Refocus on Learning?
/article/could-massachusetts-ai-cheating-case-push-schools-to-refocus-on-learning/ Thu, 31 Oct 2024 18:48:54 +0000

A Massachusetts family is awaiting a judge’s ruling in a federal lawsuit that could determine their son’s future. To a few observers, it could also push educators to limit the use of generative artificial intelligence in school.

To others, it’s simply a case of helicopter parents gone wild.

The case, filed last month, tackles key questions of academic integrity, the college admissions arms race and even the purpose of school in an age when students can outsource onerous tasks like thinking to a chatbot.


While its immediate outcome will largely serve just one family — the student’s parents want a grade changed so their son can apply for early admission to elite colleges — the case could ultimately prompt school districts nationwide to develop explicit policies on AI.

If the district, in a prosperous community on Boston’s South Shore, is forced to change the student’s grade, that could also prompt educators to focus more clearly on the knife’s edge of AI’s promises and threats, confronting a key question: Does AI invite students to focus on completing assignments rather than actual learning?

“When it comes right down to it, what do we want students to do?” asked John Warner, a well-known writing coach and author. “What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials. And that’s why you see places trying to police it.”

‘Unprepared in a technology transition’

The facts of the case seem simple enough: The parents of a senior at Hingham High School have sued the school district, saying their son was wrongly penalized as a junior for relying on AI to research and write a history project that he and a partner were assigned in Advanced Placement U.S. History. The teacher used the anti-plagiarism tool Turnitin, which flagged a draft of the essay about NBA Hall of Famer Kareem Abdul-Jabbar’s civil rights activism as possibly containing AI-generated material. So she used a “revision history” tool to uncover how many edits the students had made, as well as how long they spent writing. She discovered “many large cut and paste items” in the first draft, suggesting they’d relied on outside sources for much of the text. She ran the draft through two other digital tools that also indicated it had AI-generated content and gave the boys a D on the assignment.

From there, the narrative gets a bit murky. 

On the one hand, the complaint notes, when the student and his partner started the essay last fall, the district didn’t have a policy on using AI for such an assignment. Only later did it lay out prohibitions against AI.

The boy’s mother, Jennifer Harris, last month asked a local news outlet, “How do you know if you’re crossing a line if the line isn’t drawn?”

The pair tried to explain that using AI isn’t plagiarism, telling teachers there’s considerable debate over its use in academic assignments and that they hadn’t tried to pass off others’ work as their own.

For its part, the district says Hingham students are trained to know plagiarism and academic dishonesty when they see it. 

District officials declined to be interviewed, but in an affidavit, Social Studies Director Andrew Hoey said English teachers at the school regularly review proper citation and research techniques — and they set expectations for AI use.

Social studies teachers, he said, can justifiably expect that skills taught in English class “will be applied to all Social Studies classes,” including AP US History — even if they’re not laid out explicitly. 

A spokesperson for National History Day, the group that sponsored the assignment, provided The 74 with a link to its AI guidelines, which say students may use AI to brainstorm topic ideas, look for resources, review their writing for grammar and punctuation and simplify the language of a source to make it more understandable.

They can’t use AI to “create elements of your project” such as writing text, creating charts, graphs, images or video. 

In March, the school’s National Honor Society faculty advisor, Karen Shaw, said the pair’s use of AI was “the most egregious” violation of academic honesty she and others had seen in 16 years, according to the lawsuit. The society rejected their applications.

Peter S. Farrell, the family’s attorney, said the district “used an elephant gun to slay a mouse,” overreacting to what’s basically a misunderstanding.

The failing grade on the assignment, as well as the accusation of cheating, kept the student out of the Honor Society, the lawsuit alleges. Both penalties have limited his chances to get into top colleges on early decision, as he’d planned this fall.

The student, who goes unnamed in the lawsuit, is “a very, very bright, capable, well-rounded student athlete” with a 4.3 GPA, a “perfect” ACT score and an “almost perfect” SAT score, said Farrell. “If there were a perfect plaintiff, he’s it.” 

They knew that there was no leg to stand on in terms of the severity of that sanction.

Peter S. Farrell, attorney for student

While the boy earned a C+ in the course, he scored a perfect 5 on the AP exam last spring, according to the lawsuit. His exclusion from the Honor Society, Farrell said, “really shouldn’t sit right with anybody.”

For a public high school to take such a hard-nosed position “simply because they got caught unprepared in a technology transition” doesn’t serve anyone’s interests, Farrell said. “And it’s certainly not good for the students.”

Ultimately, the school’s own investigation found that over the past two years it had inducted into the Honor Society seven other students who had academic integrity infractions, Farrell said. The student at the center of the lawsuit was allowed to reapply and was inducted on Oct. 15.

“They knew that there was no leg to stand on in terms of the severity of that sanction,” Farrell said.

‘Districts are trying to take it seriously’

While Hingham didn’t adopt a districtwide AI policy until this school year, it’s actually ahead of the curve, said Bree Dusseault, the principal and managing director of the Center on Reinventing Public Education, a think tank at Arizona State University. Most districts have been cautious about putting out formal guidance on AI.

Dusseault contributed an affidavit on behalf of the plaintiffs, laying out the fragmented state of AI uptake and guidance. She surveyed more than 1,000 superintendents last year and found that just 5% of districts had policies on AI, with another 31% promising to develop them in the future. Even among CRPE’s group of 40 “early adopter” school districts that are exploring AI and encouraging teachers to experiment with it, just 26 had published policies in place.

They’re hesitant for a reason, she said: They’re trying to figure out what the technology’s implications are before putting rules in writing. 

“Districts are trying to take it seriously,” she said. “They’re learning the capacity of the technology, and both the opportunities and the risks it presents for learning.” But so often they’re surprised by new technological developments and capabilities that they never imagined. 

Even if they’re hesitant to commit to full-blown policies, Dusseault said, districts should consider more informal guidelines that clearly lay out for students what academic integrity, plagiarism and acceptable use are. Districts that are “totally silent” on AI run the risk of student confusion and misuse. And if a district is penalizing students for AI use, it needs to have clear policy language explaining why.

That said, a few observers believe the case boils down to little more than a cheating student and his helicopter parents.

Benjamin Riley, founder of Cognitive Resonance, an AI-focused education think tank, said the episode seems like an example of clear-cut academic dishonesty. Everyone involved in the civil case, he said, especially the boy’s parents and their lawyer, “should be embarrassed. This isn’t some groundbreaking lawsuit that will help define the contours of how we use AI in education; it’s helicopter parenting run completely amok that may serve as catnip to journalists (and their editors) but does nothing to illuminate anything.”

This isn't some groundbreaking lawsuit that will help define the contours of how we use AI in education; it's helicopter parenting run completely amok.

Benjamin Riley, Cognitive Resonance

Alex Kotran, founder of a nonprofit that offers a free AI literacy curriculum, said the honor society advisor’s statement about the boys’ alleged academic dishonesty makes him think “there’s clearly plenty more than what we’re hearing from the student.” While schools genuinely do need to understand the challenge of getting AI policies right, he said, “I worry that this is just a student with overbearing parents and a big check to throw lawyers at a problem.”

Others see the case as surfacing larger-scale problems: Writing this week, Jane Rosenzweig, director of the Harvard College Writing Center and author of a newsletter on writing, said the Massachusetts case is “less about AI and more about a family’s belief that one low grade will exclude their child from the future they want for him, which begins with admission to an elite college.”

That problem long predated ChatGPT, Rosenzweig wrote. But AI is putting our education system on a collision course “with a technology that enables students to bypass learning in favor of grades.”

“I feel for this student,” said Warner, the writing coach. “The thought that they need to file a lawsuit because his future is going to be derailed by this should be such an indictment of the system.”

The case underscores the need for school districts to rethink how they interact with students in the Age of AI, he said. “This stuff is here. It’s embedded in the tools students use to do their work. If you open up Microsoft Word or Google Docs or any of this stuff, it’s right there.”

What do we want them to take away from their education beyond a credential? Because this technology really does threaten the integrity of those credentials.

John Warner, writing coach

Perhaps as a result, Warner said, students have increasingly come to view school more transactionally, with assignments as a series of products rather than as an opportunity to learn and develop important skills.

“I’ve taught those students,” he said. “For the most part, those are a byproduct of disengagement, not believing [school] has anything to offer — and that the transaction can be satisfied through ‘non-work’ rather than work.”

His observations align with recent research by Dusseault’s colleagues, who found that four graduating classes of high school students, or about 13.5 million students, had been affected by the pandemic, with many “struggling academically, socially, and emotionally” as they enter adulthood.

Ideally, Warner said, AI tools should offer an opportunity to refocus students on process over product. “This is a natural design for somebody who teaches writing,” he said, “because I’m obsessed with process.”

Warner recalled giving a recent series of talks at a small, alternative liberal arts college in California, where he encountered students who said they had no use for AI chatbots. They preferred to think through difficult problems themselves. “They were just like, ‘Aw, man, I don’t want to use that stuff. Why do I want to use that stuff? I’ve got thoughts.’”

‘Distrust, Detection & Discipline:’ New Data Reveals Teachers’ ChatGPT Crackdown
/article/distrust-detection-discipline-new-data-reveals-teachers-chatgpt-crackdown/ Tue, 02 Apr 2024 20:01:00 +0000

New survey data puts hard numbers behind the steep rise of ChatGPT and other generative AI chatbots in America’s classrooms — and reveals a big spike in student discipline as a result.

As artificial intelligence tools become more common in schools, most teachers say their districts have adopted guidance and training for both educators and students, according to a new report by the nonprofit Center for Democracy and Technology. What this guidance lacks, however, are clear instructions on how teachers should respond if they suspect a student used generative AI to cheat.


“Though there has been positive movement, schools are still grappling with how to effectively implement generative AI in the classroom — making this a critical moment for school officials to put appropriate guardrails in place to ensure that irresponsible use of this technology by teachers and students does not become entrenched,” report co-authors Maddy Dwyer and Elizabeth Laird write.

Among the middle and high school teachers who responded to the online survey, which was conducted in November and December, 60% said their schools permit the use of generative AI for schoolwork — double the number who said the same just five months earlier on a similar survey. And while a resounding 80% of educators said they have received formal training about the tools, including on how to incorporate generative AI into assignments, just 28% said they’ve received instruction on how to respond if they suspect a student has used ChatGPT to cheat. 

That doesn’t mean, however, that students aren’t getting into trouble. Among survey respondents, 64% said they were aware of students who were disciplined or faced some form of consequences — including not receiving credit for an assignment — for using generative AI on a school assignment. That represents a 16 percentage-point increase from August. 

The tools have also affected how educators view their students, with more than half saying they’ve grown distrustful of whether their students’ work is actually theirs. 

Fighting fire with fire, a growing share of teachers say they rely on digital detection tools to sniff out students who may have used generative AI to plagiarize. Sixty-eight percent of teachers — and 76% of licensed special education teachers — said they turn to generative AI content detection tools to determine whether students’ work is actually their own. 

The findings carry significant equity concerns for students with disabilities, researchers concluded, especially in the face of evidence that AI detection tools are often ineffective.

High School Cheating Increase from ChatGPT? Research Finds Not So Much
/article/high-school-cheating-increase-from-chatgpt-research-finds-not-so-much/ Tue, 06 Feb 2024 19:30:00 +0000

The rise of AI chatbot tools caused panic among high school teachers and administrators nationwide — but researchers say the frequency of students cheating on assignments remained “surprisingly” flat.

According to research from Stanford University, about 60 to 70 percent of high school students surveyed in the fall of 2023 have engaged in cheating behavior — the same rate as before the debut of ChatGPT in the fall of 2022.

“I thought that we would see higher numbers in the fall, so it was a little surprising to me,” said Denise Pope, a senior lecturer at Stanford’s Graduate School of Education who surveyed students across 40 high schools through an organization she co-founded.

Victor Lee, an associate professor at Stanford’s Graduate School of Education who helped oversee the research with Pope, said high school students are “underwhelmed” by AI chatbot tools.

“It just sounds very sterile and vanilla to them,” Lee said. “They may have heard about it, but the media a lot of kids are using are quite different than the ones adults and working professionals are attuned to.”


A survey conducted by the Pew Research Center in the fall of 2023 found nearly one-third of students aged 13 to 17 have never heard of ChatGPT and another 44 percent have only heard “a little” about it.

Among those who were familiar with ChatGPT, the vast majority — about 81 percent — said they had not used it to help with schoolwork.

“Many teens are using a variety of technology…[but] among those who’ve heard at least a little about ChatGPT, shares of them still aren’t sure how they feel about it,” said Colleen McClain, a research associate at the Pew Research Center.

Here are four things to know about the effects AI chatbot tools have had on high school cheating:

1. High school students who weren’t cheating before aren’t cheating now.

According to earlier research, surveys of more than 70,000 high school students from 2002 to 2015 found about 64 percent of students cheated on a test — a similar outcome to Stanford’s findings after the rise of AI chatbot tools.

Pope said what surprises educators and parents the most is how common cheating has been.

“We know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology,” Pope said.

“When a student is less engaged, when they feel like they don’t belong or are not respected or valued in their community, when they’re stressed and highly sleep deprived — these are things that tend to correlate with cheating,” Pope said. 

Lee said this number will “consistently stay there unless schools engage in certain steps to be thoughtful about what climate they’re creating that motivates cheating.”

This includes tapping into the topics students are already interested in and developing useful skills based on how they naturally enjoy learning.

“A lot of the time, the AI students encounter is via Snapchat because they have a chatbot built into it,” Lee said. “And students aren’t turning to Google as their primary search, they turn to YouTube…[or] video-based searches rather than text-based.”

2. ChatGPT awareness is higher among White, wealthier and older students.

Pew found about 72 percent of White students had at least some knowledge of ChatGPT compared to 56 percent of Black students.

In addition, more than 75 percent of students in households with an annual income of $75,000 or more had some knowledge of ChatGPT compared to 41 percent of students in households with annual incomes under $30,000.

Data courtesy of the Pew Research Center. (Chart: Meghan Gallagher/鶹Ʒ)

McClain pointed to the “digital divide” as an explanation for Pew’s survey findings.

“The pattern here is quite striking,” McClain said. “It certainly speaks to the fact that not every teen is equally likely to have heard about these tools and used them.”

She added that awareness of ChatGPT was higher among older students — particularly those in 11th and 12th grade.

“Even among those who heard at least a little about ChatGPT…[young] teens may still be figuring out how they feel about it,” McClain said.

3. High school students have adopted a “good faith” approach to AI chatbot tools.

Pew found only 20 percent of students aged 13 to 17 said it was acceptable to use ChatGPT to write essays, compared to 57 percent who said it was not.

But nearly 70 percent said it was acceptable to use it to research new topics, compared to 13 percent who said it was not.

Data courtesy of the Pew Research Center. (Chart: Meghan Gallagher/鶹Ʒ)

The Stanford researchers found similar outcomes.

At four high schools surveyed in fall 2023, about 9 to 16 percent of students used AI chatbot tools to write essays, and about 55 to 77 percent used them to generate an idea for a paper, project or assignment.

Data courtesy of Stanford’s Graduate School of Education. (Chart: Meghan Gallagher/鶹Ʒ)

“The vast majority don’t want AI to do all the work for them so they’re coming into this with sort of a good faith effort,” Lee said.

“When I’ve had conversations with educators, they sort of breathe a sigh of relief and think ‘oh okay let’s think about some of the cool things we could do’ and that’s exciting,” Lee added.

4. Prohibiting AI chatbot tools won’t solve the systemic issues of why students cheat.

For Pope, finding comfort around AI chatbot tools starts with educators and parents bringing their students into the conversation.

“If you’re going to come up with a classroom or home policy, you want to have the students present, speaking up and telling you what they think will be the most useful and appropriate uses of AI,” Pope said.

Lee said addressing AI chatbot tool usage in high schools is just the “tip of a much larger iceberg.”

“Part of why we get concerned is because students feel pretty disenfranchised from the boring assignments, tedious homework and essays in these weird written formats that they don’t feel will provide them any long term need or use,” Lee said.

“I don’t see us as saying AI is the best thing since sliced bread, but I also don’t think of us as saying AI is going to destroy humanity,” Lee added.

Survey: AI is Here, but Only California and Oregon Guide Schools on its Use
/article/survey-ai-is-here-but-only-california-and-oregon-guide-schools-on-its-use/ Wed, 01 Nov 2023 04:01:00 +0000

Artificial intelligence now has a daily presence in many teachers’ and students’ lives, with chatbots like ChatGPT and Khan Academy’s tutor, along with AI image generators, all freely available.

But nearly a year after most of us came face-to-face with the first of these tools, a new survey finds that few states are offering educators substantial guidance on how to best use AI, let alone fairly and with appropriate privacy protections.

As of mid-October, just two states, California and Oregon, offered official guidance to schools on using AI, according to the Center on Reinventing Public Education at Arizona State University.

CRPE said 11 more states are developing guidance, but that another 21 states don’t plan to give schools guidelines on AI “in the foreseeable future.”


Seventeen states didn’t respond to CRPE’s survey and haven’t made official guidance publicly available.


As more schools experiment with AI, good policies and advice — or a lack thereof — will “drive the ways adults make decisions in school,” said Bree Dusseault, CRPE’s managing director. That will ripple out, dictating whether these new tools will be used properly and equitably.

“We’re not seeing a lot of movement in states getting ahead of this,” she said. 

The reality in schools is that AI is here. Edtech companies are pitching products and schools are buying them, even if state officials are still trying to figure it all out. 


“It doesn’t surprise me,” said Satya Nitta, CEO of Merlyn Mind, a generative AI company developing voice-activated assistants for teachers. “Normally the technology is well ahead of regulators and lawmakers. So they’re probably scrambling to figure out what their standard should be.”

Nitta said a lot of educators and officials this week are likely looking “very carefully” at Monday’s White House executive order on AI “to figure out what next steps are.”

The order requires, among other things, that AI developers share safety test results with the U.S. government and develop standards that ensure AI systems are “safe, secure, and trustworthy.” 

It comes five months after the U.S. Department of Education released a detailed report with recommendations on using AI in education.

Deferring to districts

The fact that 13 states are at least in the process of helping schools figure out AI is significant. Last summer, no states offered such help, CRPE found. Officials in New York, Rhode Island and Wyoming said decisions about many issues related to AI, such as academic integrity and blocking websites or tools, are made on the local level.

Still, researchers said, it’s significant that the majority of states still don’t plan AI-specific strategies or guidance in the 2023-24 school year.

There are a few promising developments: North Carolina will soon require high school graduates to pass a computer science course. In Virginia, Gov. Glenn Youngkin in September announced an initiative on AI careers. And Pennsylvania Gov. Josh Shapiro in September signed an executive order to create a state governing board to guide use of generative AI, including developing training programs for state employees.


But educators need help understanding artificial intelligence, “while also trying to navigate its impact,” said Tara Nattrass, managing director of innovation strategy at the International Society for Technology in Education. “States can ensure educators have accurate and relevant guidance related to the opportunities and risks of AI so that they are able to spend less time filtering information and more time focused on their primary mission: teaching and learning.”

Beth Blumenstein, Oregon’s interim director of digital learning & well-rounded access, said AI is already being used in Oregon schools. And the state Department of Education has received requests from educators asking for support, guidance and professional development.


Generative AI is “a powerful tool that can support education practices and provide services to students that can greatly benefit their learning,” she said. “However, it is a highly complex tool that requires new learning, safety considerations, and human oversight.”

Three big issues she hears about are cheating, plagiarism and data privacy, including how not to run afoul of Oregon’s Student Information Protection Act or the federal Children’s Online Privacy Protection Act.

‘Now I have to do AI?’

In August, CRPE conducted focus groups with 18 superintendents, principals and senior administrators in five states who said they were cautiously optimistic about AI’s potential, but many complained about navigating yet another new disruption.

“We just got through this COVID hybrid remote learning,” one leader told researchers. “Now I have to do AI?”

Nitta, Merlyn Mind’s CEO, said that syncs with his experience.

“Broadly, school districts are looking for some help, some guidance: ‘Should we use ChatGPT? Should we not use it? Should we use AI? Is it private? Are they in violation of regulations?’ It’s a complex topic. It’s full of all kinds of mines and landmines.” 

And the stakes are high, he said. No educator wants to appear in a newspaper story about her school using an AI chatbot that feeds inappropriate information to students. 

“I wouldn’t go so far as to say there’s a deer-caught-in-headlights moment here,” Nitta said, “but there’s certainly a lot of concern. And I do believe it’s the responsibility of authorities, of responsible regulators, to step in and say, ‘Here’s how to use AI safely and appropriately.’ ” 

Before Trump, D.A. Fani Willis Targeted Teachers in Atlanta Cheating Scandal
/article/before-trump-d-a-fani-willis-targeted-teachers-in-atlanta-cheating-scandal/ Fri, 18 Aug 2023 11:30:00 +0000

A decade before she unleashed the sprawling case now entangling former President Donald Trump in Georgia, Fulton County District Attorney Fani Willis used similar methods to target an unlikely group: public school educators in Atlanta.

As an assistant district attorney in 2013, Willis turned heads in one of her first big cases: She helped convene a grand jury that indicted decorated Superintendent Beverly Hall and nearly three dozen other educators for cheating on state standardized tests. In the end, Willis brought a dozen cases to trial, with a jury convicting 11.

This week, Willis invoked the same statute — Georgia’s Racketeer Influenced and Corrupt Organizations, or RICO, Act — to indict Trump and 18 others in an alleged plot to overturn the state’s 2020 election results. 

In doing so, she offered a reminder of her role in a divisive chapter in the city’s recent history. While the former president has charged that Willis is, among other things, “a rabid partisan,” the cheating prosecutions left fissures in her own community, where many say she stood up for children but others accuse her of turning her back on Black educators.

‘Cooking the books’

Hall, the Atlanta superintendent, arrived in the district in 1999, eventually leading what she would call a data-driven turnaround. She told observers that under her tenure, Atlanta schools were “debunking the American algorithm that socio-economics predicts academic success,” The Atlanta Journal-Constitution reported.

By 2009, her efforts had earned her one of education’s top honors: She was named National Superintendent of the Year. But the same year, the Journal-Constitution published the first of several stories analyzing Atlanta’s results on the Georgia Criterion-Referenced Competency Test. The analysis found that scores had risen at rates that were statistically “all but impossible.” It also found that district officials disregarded internal irregularities and retaliated against whistleblowers.

Critics would soon compare Hall to “a Mafia boss who demanded fealty from subordinates while perpetrating a massive, self-serving fraud,” the city newspaper reported at the time. Willis pursued Hall using the same tools many prosecutors employ against Mafia bosses and drug kingpins. In bringing charges under the state’s RICO Act, Willis alleged that Hall and her colleagues used the “legitimate enterprise” of the school system to carry out an illegitimate act: cheating.

Lonnie King, a former head of the local NAACP, told the newspaper that when he looked at the data, “I thought Beverly Hall was cooking the books” as early as 2006.

The newspaper’s coverage led Gov. Sonny Perdue to appoint a team of special investigators, who conducted 2,100 interviews and reviewed 800,000 documents. By 2011, they uncovered cheating in 44 of the 56 schools they examined, concluding that 178 educators participated. Investigators eventually found widespread tampering with test papers and concluded that Hall stood at the center of “a culture of corruption.”

Special investigator Michael Bowers, a former state attorney general, said in 2013 that interrogating teachers in the scheme had left him in tears.

“The thing I remember most was talking to some of the teachers who had been mistreated, mostly single moms,” he said. “And it’s heartbreaking. They told of how they had been forced to cheat.” One told him, “I had no choice.”

‘On the backs of babies’

Hall retired in 2011, but on March 29, 2013, a Fulton County grand jury indicted her and more than 30 others in what Willis called a conspiracy comprising administrators, principals, teachers and even a school secretary.

Similar to this week’s indictments, the Atlanta defendants faced charges of racketeering, conspiracy and making false statements. Hall also faced theft charges because her rising salary was tied to test scores — in 2009, the year she was named Superintendent of the Year, she collected performance bonuses, prosecutors noted.

Former Atlanta Mayor Andrew Young in 2014 asked the judge in Superintendent Beverly Hall’s criminal trial to be “merciful” and drop the case. Hall died of breast cancer in 2015. (Monica Morgan/Getty Images)

If convicted, Hall could have served as many as 45 years in prison, but she soon fell ill and the judge in the case indefinitely postponed her trial. At an April 2014 hearing, Andrew Young, a former Atlanta mayor and United Nations ambassador, rose in the courtroom and asked the judge to be “merciful” and drop the case against her.

“Let God judge her,” he said.

Hall died of breast cancer in 2015, at age 68.

Public opinion on the case was sharply divided, with many Black commentators accusing Willis of overreach. But eventually, 34 of Hall’s subordinates faced criminal charges.

Brittney Cooper

Brittney Cooper, a professor of Women’s and Gender Studies and Africana Studies at Rutgers University, wrote: “Scapegoating Black teachers for failing in a system that is designed for Black children, in particular, not to succeed is the real corruption here.”

Cooper noted that former Washington, D.C., Schools Chancellor Michelle Rhee, who is Korean-American, had also been criticized for creating a “culture of fear about test scores.” An investigation by USA Today revealed findings similar to Atlanta’s, but an inspector general report found no evidence of widespread cheating, and Rhee never faced prosecution.

While most of the Atlanta educators eventually pleaded guilty to avoid jail time, 12 went to trial in 2014. As with the Trump case, this one was complex: Jury selection alone dragged on, and jurors sat through intricate statistical analyses of answer-sheet erasure patterns, among other matters. At a few points in the trial, a dozen or more lawyers offered different versions of events.

A demonstrator holds a sign in support of prosecutor Fani Willis outside of the Lewis R. Slaton Courthouse before this week’s indictment of former U.S. President Donald Trump in Atlanta, Georgia. (Christian Monterrosa/AFP)

In an early case that went to trial in 2013, Willis said supervisor Tamara Cotman worked to protect educators’ jobs by advising principals under investigation not to cooperate with state investigators — a charge Cotman denied — and by vowing to return high test scores at any cost.

“She did it on the backs of babies,” Willis said during closing arguments. The jury acquitted Cotman, who was later convicted of other charges in the larger case.

Former President Donald Trump at the Georgia state GOP convention on June 10, 2023. Fani Willis, the prosecutor who is pursuing the Georgia election case, made a name for herself a decade ago by pursuing similar racketeering charges against Atlanta educators. (Anna Moneymaker/Getty Images)

In court, Willis told the jury of “cheating parties” at which educators got together to erase children’s incorrect answers on test sheets and pencil in correct ones. At a few of the parties, she said, educators “ate fish and grits — I can’t make this up.” 

The jury convicted 11 of the 12 of racketeering and other charges.

The Rev. Dr. Raphael G. Warnock, at the time senior pastor of Ebenezer Baptist Church — he now serves as a U.S. Senator — told The New York Times, “There’s no question that this has not been our finest hour. It’s a dark chapter, but it’s just that. It’s a chapter.”

In 2015, commentators Van Jones and Mark Holden wrote that the educators convicted in the case were “the latest victims of overcriminalization,” facing serious jail time because of Willis’s “unprecedented use” of RICO. Three were sentenced to seven years in prison, they noted, while others who declined plea deals received one- or two-year sentences.

“These punishments do not fit the crimes,” they wrote. 

Sen. Raphael G. Warnock, then senior pastor of the Ebenezer Baptist Church, called the cheating scandal a “dark chapter.” (Curtis Compton/Getty Images)

Since then, several of the defendants have loudly proclaimed their innocence, even as they’ve served prison time or pursued appeals to avoid it. A handful of those appeals remain outstanding. In several instances, the defendants and their supporters say they’ve spent their life savings on the legal fight.

In 2019, Shani Robinson, one of those found guilty, wrote about the ordeal. In an interview, she said, “the thought of being blamed for something that I did not do is horrifying. … I felt like if I was on the right side of justice, that one day I would be vindicated. That was the moment that I decided that I would never take a plea deal.”

But many parents saw it differently.

Shawnna Hayes-Jocelyn had three of her four children in classes at schools affected by the cheating. She said Willis rightly brought RICO charges. 

“You’d better believe she did the right thing, because that was the worst Black-on-Black crime example that could have ever happened around education,” she told 鶹Ʒ. “Because what they did to those children is that they didn’t give those children options and opportunities.”

Shawnna Hayes-Jocelyn

Hayes-Jocelyn said her mind was made up once she read the state report that alleged widespread cheating among educators. 

“When I read that report and saw what was happening in that school system, yeah, people said, ‘Oh, this is RICO. We think about RICO as organized crime.’ I said, ‘This was organized crime.’” 

Those familiar with Willis’s work say she’s tenacious. Atlanta NAACP president Gerald Griggs, one of the defense attorneys in the cheating trial, told The Guardian this week that Trump is “going to be very surprised when he’s sitting across from her for months on trial. He’ll find out how great of a lawyer she really is.”

Asked in 2021 if she had regrets about pursuing the school cheating cases, Willis was blunt, telling the Times that by going after teachers, principals and administrators, she was “defending poor Black children.” Public education, she said, offers these children their only chance to get ahead. “So if what I am being criticized for is doing something to protect people that did not have a voice for themselves, I sit in that criticism, and y’all can put it in my obituary.”

Texas Professors on ChatGPT: ‘Strategize, Don’t Demonize’ to Curb Academic Dishonesty (Aug. 5, 2023)

This article was originally published in El Paso Matters.

A faculty member at the University of Texas at El Paso was grading a composition during the spring 2023 semester and suspected it was not the student’s work – then one sentence removed all doubt.

The instructor of the upper-level course with a strong writing component believed the essay was prepared, at least in part, by ChatGPT, an artificial intelligence program that launched last November. With a few prompts, users of the free AI program can produce essays, research papers, computer code and more with relative ease.

To stymie ChatGPT, the lecturer directed her students to base one of their answers on how they related to the assigned readings. The answer from the student in question did not sound like a student’s “voice.” The final confirmation was the inclusion of something like “I don’t have a personal experience because I’m AI.”


The student, who earned a zero for his paper, acknowledged his offense and apologized to the instructor, who did not want to be named. The student is among those who tried to cut academic corners with ChatGPT. In most cases, these indiscretions were handled at the classroom level. More serious offenses were submitted to the university’s Office of Student Conduct and Conflict Resolution, or OSCCR.

“I’d like to see more suggestions or training on how to proactively address ChatGPT with my students rather than solely acting in the role of ‘catching’ and disciplining them,” the faculty member said.

To address those concerns, UTEP conducted a series of workshops late last spring to inform faculty about the pervasive use of ChatGPT and other forms of AI. UTEP’s Center for Faculty Leadership and Development organized the presentations to increase awareness and, where possible, to educate faculty on how to use AI effectively in the classroom and as an assessment tool. About 50 instructors from throughout the university attended.

Jeffrey Olimpo, director of the faculty leadership center, said the main concern workshop participants shared with him was students’ unethical use of ChatGPT and AI in general. His response was that AI is not going away.

“We came at it from an angle of, ‘You can’t put the toothpaste back in the tube,’” Olimpo said a few weeks after the last workshop.

The event’s presenters included representatives from OSCCR and the Provost’s Office. Olimpo recalled that the OSCCR official said that his office already had seen some potential ChatGPT cases.

Strategize, don’t demonize

The Office of Student Conduct and Conflict Resolution conducted 20 investigations into possible cases of academic dishonesty tied to the use of AI during the spring 2023 semester, according to the university. UTEP did not respond to a question about how those cases were resolved and said that OSCCR director Jovita Simón would not comment on this story.

While the university was aware of ChatGPT’s potential downsides, Olimpo said there was no reason to chase it down with torches and pitchforks.

“We try to strategize and not demonize,” he said.

Arthur Ramirez, a second-year UTEP doctoral student in finance, said he began to test ChatGPT soon after it launched to learn if it could help with his research. Initially, he was concerned with its inaccuracies, but found it helpful with coding, especially with better prompts, and to understand certain charts. He said the only instructions a professor gave him was to follow the university’s guidelines.

“He said there was no right or wrong way to use ChatGPT,” Ramirez said. “Just don’t abuse it.”

Responding to an El Paso Matters Instagram request for students to share their experiences, one UTEP student said that some of his professors encouraged students to use ChatGPT, while others warned them not to use it for plagiarism.

“I don’t see what the big deal is,” wrote the student who identified himself as “sergio.iii.”

Sergio.iii called the AI program an effective study and communication tool with the right prompts. He said it helped create outlines for papers, add focus to his PowerPoint presentations and often gave more understandable explanations to complicated topics.

“The students using ChatGPT unethically aren’t even being smart about it,” he wrote via Instagram. “Most people use it in a brain-dead way where they just copy and paste answers straight out of ChatGPT and they end up with responses that look identical to a dozen other students.” El Paso Matters reached out to the user, but he did not respond.

Leslie Waters, an assistant professor of history, did not offer ChatGPT instructions at the start of the spring 2023 semester. She believed the obscure primary source material from her 20th century European history course would be AI-proof. In one case she gave students copies of letters written by soldiers and their families during World War I and asked them to write essays based on the letters’ themes.

Three of her students submitted papers that focused generally on the war, but did not mention the letters or their themes. Additionally, the essays included ChatGPT red flags: grammatically correct sentences that lacked analysis and critical thinking. Each of those students earned low scores. Waters planned to send one of those cases to OSCCR, which she said uses software that can detect AI-generated material.

“It’s not easy (for me) to prove, but it’s extremely easy for me to detect,” she said of ChatGPT work.

Her plan for the fall 2023 semester is to talk to her students about the perils of using ChatGPT and to encourage them to stay on top of their coursework. In her experience, students cheat out of desperation. She will give multi-stage assignments that require students to submit papers at several points along the way so she can track their progress.

Olimpo did not respond to several requests for the recommendations generated by his spring workshops, but he previously proposed that a faculty committee review and possibly update the university’s general course syllabus in regards to the use of AI tools.

A June 2023 article in The Chronicle of Higher Education included the results of a faculty survey of how to work with ChatGPT this fall. Two of the more popular ideas were to alter assignments to make AI participation less useful, and to incorporate AI in some work to help students understand its strengths and weaknesses.

As for El Paso Community College, its ChatGPT directive to students is to follow their professors’ instructions for assignments and the academic guidelines in the college’s Student Code of Conduct, said Keri Moe, associate vice president for External Relations Communication & Development.

“ChatGPT, like any technology available, must be used with academic integrity and in accordance with these guidelines,” Moe said.

Texas Tech University Health Sciences Campus El Paso did not respond to a request for instructions on how its leaders want faculty and students to use ChatGPT.

Academic integrity

While some faculty members want to use AI tools such as Turnitin to catch cheaters, Sarah Elaine Eaton, an associate professor in the Werklund School of Education at the University of Calgary in Canada, advised them not to overreact.

During a May 16 virtual forum about “Academic Integrity and AI,” Eaton said that instructors should include a statement in their syllabus about the AI they plan to use to help with their assessments and inform the students about the limitations of those programs.

“It’s not about trying to use technology in order to catch students,” Eaton said during the presentation. “Nobody wins in an academic-integrity arms race. Deceptive assessment using tools and technologies without students’ knowledge ahead of time is not modeling integrity.”

Greg Beam, an associate professor of practice in UTEP’s Department of Communication, said that he taught an asynchronous virtual course this summer and strongly suspected that some students submitted work done by chatbots. He posted a video on Blackboard where he explained the right and wrong ways to use ChatGPT.

Beam told the students that those who admitted that they used the technology improperly would be allowed to redo the assignment with no penalty. Additionally, he told them that he would contact those who did not come forward to ask them follow-up questions about their submissions to verify that they understood the material.

The professor said about 10% of those students redid the assignment. He suspected a few others, but those submissions lacked the tell-tale red flags. It made him wonder if some students had mastered ChatGPT enough to be undetectable.

“For the most part, at UTEP at least, I don’t think students want to cheat – they want to learn,” Beam said. “And they’re just as concerned about the potential ramifications of these new technologies as the rest of us are.”

This story first appeared on El Paso Matters and is republished here under a Creative Commons license.

Did Prison Inmates Cheat on High School Equivalency Exam? (Dec. 30, 2022)

This article was originally published in the Daily Montanan.

An investigation into allegations a test administrator helped inmates cheat on a high school equivalency exam at the Shelby prison found “no confirming evidence” to support the claims, according to records from the Office of Public Instruction.

“The reporting individual was unable to provide supporting evidence or additional witnesses to validate his claim,” the investigation report said.

However, testing has not resumed at the institution for a separate reason, according to CoreCivic, a private company that runs the state prison in Shelby.


The prison, called the Crossroads Correctional Center, houses 757 male inmates, according to a Department of Corrections data dashboard.

The testing program, HiSET, allows adults without a high school diploma to earn the equivalent of a high school degree. OPI earlier provided records that said testing had been suspended at the prison as of April.

Last week, CoreCivic director of public affairs Ryan Gustin said the Shelby prison remains a HiSET testing site. However, he said a separate technology glitch unrelated to the allegation means testing has not resumed at the facility.

“All of the IT teams are aware of this matter and are quickly working to alleviate that challenge,” Gustin said.

The issue stems from a change in test provider and ensuing technology “hiccups,” he said. Gustin said testing at the Crossroads location takes place using computers, and in the case of a provider change, “you have to get one hand to shake the other hand.”

“We don’t have the option contractually to offer a paper exam,” he said.

The Montana Department of Corrections said other “DOC-run facilities” have not had testing difficulties — and Gustin said other locations are able to use paper testing, unlike at Crossroads.

The new testing provider and CoreCivic are working to resolve the issue as quickly as possible, Gustin said, and it’s in their interest to do so.

The original cheating allegation came from a previous Crossroads inmate who said staff were sharing exam questions with inmates before testing and assisting inmates during the HiSET tests. As a result, Montana OPI suspended all HiSET testing at Crossroads and requested an investigation.

The September investigation report found no evidence of cheating. It said investigators reviewed answer sheets and test reports but could not substantiate the claims.

A Crossroads principal who retired in August agreed she had shared “test material” with test takers, but she said she had shared “released exam questions” and The Official Guide to HiSET Exam, according to the investigation report. It said a new principal was unaware of any previous infractions.

Investigators requested additional information from the inmate who made the allegations, but the report said he admitted his accusation was “based on his assumptions.” He also argued that because he had moved to a prerelease center, he was “not comfortable to freely share information,” the report said.

The investigation report also said the inmate who made the accusations may have had a different reason for bringing forward a complaint: “It is likely that this unfounded allegation could have arisen from a salary dispute between this individual and Crossroads Correction Center over tutoring hours.”

The report did not elaborate on the salary dispute.

A September letter from a senior national HiSET director provided by OPI with the investigation report and by CoreCivic said test results from the site are sound: “Tests were not compromised and HISET scores obtained by individuals testing here are valid.”

Daily Montanan is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Daily Montanan maintains editorial independence. Contact Editor Darrell Ehrlick for questions: info@dailymontanan.com. Follow Daily Montanan on social media.

The Essay’s Future: We Talk to 4 Teachers, 2 Experts and 1 AI Chatbot (Dec. 19, 2022)

ChatGPT, an AI-powered “large language” model, is poised to change the way high school English teachers do their jobs. With the ability to understand and respond to natural language, ChatGPT is a valuable tool for educators looking to provide personalized instruction and feedback to their students. 

O.K., you’ve probably figured out by now that ChatGPT wrote that self-congratulatory opening. But it raises a question: If AI can produce a journalistic lede on command, what mischief could it unleash in high school English?

Actually, the chatbot, built by the San Francisco-based R&D company OpenAI, is not intended to make high school English teachers obsolete. Instead, it is designed to assist teachers in their work and help them to provide better instruction and support to their students.


O.K., ChatGPT wrote most of that too. But you see the problem here, right?

English teachers, whose job is to get young students to read and think deeply and write clearly, are this winter coming up against a formidable, free-to-use foe that can do it all: With just a short prompt, it can produce essays, song lyrics, short stories, even outlines and analyses of other writings. 

One user asked it to write a letter to a young boy explaining that “Santa isn’t real and we make up stories out of love.” In five trim paragraphs, it broke the bad news from Santa himself and told the boy, “I want you to know that the love and care that your parents have for you is real. They have created special memories and traditions for you out of love and a desire to make your childhood special.”

One TikToker noted recently that users can upload a podcast, lecture, or YouTube video transcript and ask ChatGPT to take complete notes.


Many educators are alarmed. One high school computer science teacher wrote last week, “I am having an existential crisis.” Many of those who have played with the tool over the past few weeks fear it could tempt millions of students to outsource their assignments and basically give up on learning to listen, think, read, or write.

Others, however, see potential in the new tool. Upon ChatGPT’s release, 鶹Ʒ queried high school teachers and other educators, as well as thinkers in the tech and AI fields, to help us make sense of this development.

Here are seven ideas, only one of which was written by ChatGPT itself:

1. By its own admission, it messes up.

When we asked ChatGPT, “What’s the most important thing teachers need to know about you?” it offered that it’s “not a tool for teaching or providing educational content, and should not be used as a substitute for a teacher or educational resource.” It also admitted that it’s “not perfect and may generate responses that are inappropriate or incorrect. It is important to use ChatGPT with caution and to always fact-check any information it provides.”

2. It’s going to force teachers to rethink their practice — whether they like it or not. 

Josh Thompson, a former Virginia high school English teacher working on these issues for the National Council of Teachers of English, said it’s naïve to think that students won’t find ChatGPT very, very soon, and start using it for assignments. “Students have probably already seen that it’s out there,” he said. “So we kind of have to just think, ‘O.K., well, how is this going to affect us?’”

Josh Thompson (Courtesy of Josh Thompson)

In a word, Thompson said, it’s going to upend conventional wisdom about what’s important in the classroom, putting more emphasis on the writing process than the product. Teachers will need to refocus, perhaps even using ChatGPT to help students draft and revise. Students “might turn in this robotic draft, and then we have a conference about it and we talk,” he said.

The tool will force a painful conversation, Thompson and others said, about the utility of teaching the standard five-paragraph essay, which he joked “should be thrown out the window anyway.” While it’s a good template for developing ideas, it’s really just a starting point. Even now, Thompson tells students to think of each of the paragraphs not as complete writing, but as the starting point for sections of a larger essay that only they can write.

3. It’s going to refocus teachers on helping students find their authentic voice.

In that sense, said Sawsan Jaber, a longtime English teacher at East Leyden High School in Franklin Park, Ill., this may be a positive development. “I really think that a key to education in general is we’re missing authenticity.”

Technology like ChatGPT may force teachers to focus less on standard forms and more on student voice and identity. It may also force students to think more deeply about the audience for their writing, which an AI likely will never be able to do effectively.

Sawsan Jaber (Courtesy of Sawsan Jaber)

“I think education in general just needs a facelift,” she said, one that helps teachers focus more closely on students’ needs. Actually, Jaber said, the benefits of a free tool like ChatGPT might most readily accrue to students like hers from low-income households in areas like Franklin Park, near Chicago’s O’Hare Airport. “The world is changing, and instead of fighting it, we have to ask ourselves: ‘Are the skills that we’ve historically taught kids the skills that they still need in order to be successful in the current context?’ And I’m not sure that they are.”

Jaber noted that universities are asking students to do more project-based and “unconventional” work that requires imagination. “So why are we so stuck on getting kids to write the five-paragraph essay and worrying if they’re using an AI generator or something else to really come up with it?”

An AI-generated image by DALL-E, prompted with the text “robot hanging out with cool high school students in front of lockers.” (DALL-E)

4. It could upend more than just classroom practice, calling into question everything from Advanced Placement assignments to college essays.

Shelley Rodrigo, senior director of the Writing Program at the University of Arizona, said the need for writing instruction won’t go away. But what may soon disappear is the “simplistic display of knowledge” schools have valued for decades.

Shelley Rodrigo (Courtesy of Shelley Rodrigo)

“If it’s, ‘Compare and contrast these two novels,’ O.K., that’s a really generic assignment that AI can pull stuff from the Internet really easily,” she said. But if an assignment asks students to bring their life experience to the discussion of a novel, students can’t rely on AI for help.

“If you don’t want generic answers,” she said, “don’t ask generic questions.”

In looking at coverage of the kinds of writing generated by ChatGPT, Rodrigo, also president-elect of NCTE, said it’s easy to see a pattern that others have commented on: Most of it looks like something that would score well on an AP exam. “Part of me is like, ‘O.K., so that potentially is a sign that that system is broken.’”

5. Students: Your teachers may already be able to spot AI-assisted writing.

While one of the advantages of relying on ChatGPT may be that it’s not technically plagiarism or even the product of an essay mill, that doesn’t mean it’s 100% foolproof.

Eric Wang (Courtesy of Eric Wang)

Eric Wang, a statistician and vice president of AI at Turnitin.com, the plagiarism-detection firm, noted that engineers there can already detect writing created by large-language “fill-in-the-next-word” processes, which is what most AI models use.

How? It tends to follow predictable patterns. For one thing, it uses fewer sophisticated words than humans do: “Words that are less frequent, maybe a little more esoteric — like the word ‘esoteric,’” he said. “Our use of rare words is more common.”

AI applications tend to use more high-probability words in expected places and “favor those more probable words,” Wang said. “So we can detect it.”

Kids: Your untraceable essay may in fact be untraceable — but it’s not undetectable. 
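Wang’s point — that machine-generated text leans on high-probability, common words while human writing reaches for rarer ones — can be illustrated with a toy heuristic. To be clear, this is not Turnitin’s detector: real systems score per-token probabilities with a language model. The word list, threshold, and function names below are invented purely for demonstration.

```python
# Toy sketch of the "predictable word choice" idea Wang describes.
# A real detector models per-token probability; here we just measure
# how much of a text's vocabulary falls outside a tiny common-word list.

COMMON_WORDS = {
    "the", "a", "an", "is", "are", "was", "were", "to", "of", "and",
    "in", "it", "that", "this", "for", "with", "on", "as", "be", "have",
    "not", "but", "they", "you", "we", "can", "will", "at", "by", "or",
}

def rare_word_fraction(text: str) -> float:
    """Return the fraction of words that fall outside the common-word list."""
    words = [w.strip(".,;:!?\"'").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    rare = sum(1 for w in words if w not in COMMON_WORDS)
    return rare / len(words)

def looks_machine_generated(text: str, threshold: float = 0.45) -> bool:
    """Flag text whose vocabulary is dominated by high-frequency words.

    The 0.45 threshold is arbitrary, chosen only to make the demo work.
    """
    return rare_word_fraction(text) < threshold
```

On this crude measure, a sentence built almost entirely from common function words gets flagged, while prose full of infrequent words — “like the word ‘esoteric,’” as Wang put it — does not. Production detectors do something far more statistically careful, but the underlying intuition is the same.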

6. Like most technological breakthroughs, ChatGPT should be understood, not limited or banned — but that takes commitment.

L.M. Sacasas, a writer who publishes a newsletter on technology and culture, likened the response to ChatGPT to the early days of Wikipedia: While many teachers saw that research tool as radioactive, a few tried to help students understand “what it did well, what its limitations were, what might be some good ways of using Wikipedia in their research.”

In 2022, most educators — as well as most students — now see that Wikipedia has its place. A well-constructed page not only helps orient a reader; it’s also “kind of a launching pad to other sources,” Sacasas said. “So you know both what it can do for you and what it can’t. And you treat it accordingly.” 

Sacasas hopes teachers use the same logic with ChatGPT.

More broadly, he said, teachers must do a better job helping students see how what they’re learning has value. So far, “I think we haven’t done a very good job of that, so that it’s easier for students to just take the shortcut” and ask software to fill in rather meaningless blanks.

If even competent students are simply going through the motions, he said, “that will encourage students to make the worst use of these tools. And so the real project for us, I’m convinced, is just to instill a sense of the value of learning, the value of engaging texts deeply, the value of aesthetic pleasure that cannot be instrumentalized. That’s very hard work.”

An AI-generated image by DALL-E, prompted with the text “classroom full of robots sitting at desks.” (DALL-E)

7. Underestimate it at your peril.

OpenAI’s Sam Altman earlier this month tried to lower expectations, writing that the tool “is incredibly limited, but good enough at some things to create a misleading impression of greatness.”

How does it feel, Bob Dylan, to see an AI chatbot write a song in your style about Baltimore? (Getty Images)

Ask ChatGPT to write a Bob Dylan-style song about Baltimore, for example, and … well, it’s not very good or very Dylanesque at the moment. The chorus:

Baltimore, Baltimore

My home away from home

The people are friendly

And the crab cakes are to die for.

Altman added, “It’s a mistake to be relying on it for anything important right now.” 

Jake Carr (Courtesy of Jake Carr)

The tool’s capabilities in many ways may not be very sophisticated now, said Jake Carr, an English teacher in northern California. “But we’re fooling ourselves if we think something like ChatGPT isn’t only going to get better.”

Carr asked the tool to write a short story about “kids who ride flying narwhals” and got a rudimentary “Golden Books” sort of tale. But then he got an idea: Could it produce an outline of such a story using Joseph Campbell’s “hero’s journey” template?

It could and it did, producing “a pretty darn good outline” that used all of the storytelling elements typically present in popular fiction and screenplays.

He also cut-and-pasted several of his students’ essay drafts into the tool and asked it to grade each one based on a rubric he provided.


“I tell you what: It’s not bad,” he said. The tool even isolated each essay’s thesis statement.

Carr, who frequently posts TikToks about tech, admitted that ChatGPT is scary for many teachers, but that they should play with it and consider how it forces them to think more deeply about their work. “If we don’t talk about it, if we don’t begin the conversation, it’s going to happen anyways and we just won’t get to be part of the conversation,” he said. “We just have to be forward thinking and not fear change.”

But perhaps we shouldn’t be too sanguine. Asked to write a haiku about its own potential for mayhem, ChatGPT didn’t mince words:

Artificial intelligence

Powerful and dangerous

Beware, for I am here

VIDEO: New Research Shows True Toll Cheating Scandal Took on Atlanta’s Most Vulnerable Students

When teachers cheat, students ultimately suffer in the long run.

Everyone who follows education news remembers the Atlanta cheating scandal, in which several teachers went to jail. But what happened to the students whose test scores were manipulated?

Research now shows that those students’ scores on later tests were noticeably lower, with enduring effects on English exams. (There’s also evidence that cheating led to an uptick in high school dropout rates among students whose tests were fudged.)

The researchers suggest that the students were harmed because they didn’t get the remediation or extra support services they needed since their true academic standing was masked by the phony scores.

In other words, cheating on high-stakes exams doesn’t just corrupt accountability systems; it also inflicts lasting harm on students who were denied meaningful evaluations.

This was one of several new research papers discussed at an event organized by CALDER — a research group affiliated with the American Institutes for Research. Read more of Matt’s top takeaways from the event here.
