Teachers use AI to grade essays. But some experts raise ethical concerns

When Diane Gayeski, a professor of strategic communication at Ithaca College, received an essay from one of her students, she ran part of it through ChatGPT, asking the AI tool to critique and suggest ways to improve the work.

“The best way to look at AI for grading is as a teaching assistant or research assistant that might do a first pass … and it does a pretty good job at that,” she told CNN.

She showed her students the feedback from ChatGPT and how the tool rewrote their essays. “I’ll share my thoughts on their intro as well, and we’ll talk about it,” she said.

Gayeski required her class of 15 students to do the same: run their drafts through ChatGPT to see where they could make improvements.

The advent of AI is reshaping education, providing real benefits, such as automating some tasks to free up time for more personalized instruction, but also some major dangers, from issues around accuracy and plagiarism to maintaining integrity.

Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism detection platform Turnitin, found that half of college students were using AI tools in fall 2023. Fewer faculty members were using AI, but the share was growing: 22% of faculty members used AI in fall 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to help grade papers, write feedback, develop lesson plans and create assignments. They’re also using the evolving tools to create quizzes, polls, videos and interactives to “up the ante” for what’s expected in the classroom.

Students, meanwhile, rely on tools like ChatGPT and Microsoft Copilot, which is built into Word, PowerPoint and other products.

But while some schools have formed policies about how students can or can’t use AI for school work, most schools have no guidelines for teachers. The practice of using AI to write feedback or grade assignments also raises ethical considerations. And parents and students who have already spent hundreds of thousands of dollars on tuition may wonder whether the endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

“If teachers use it solely to assess, and students use it solely to produce a final product, it’s not going to be effective,” Gayeski said.

A time and place for AI
How teachers use AI depends on many factors, particularly when it comes to grading, according to Dorothy Leidner, a professor of business ethics at the University of Virginia. If the material being tested in large classes is largely declarative knowledge — so there’s a clear right and wrong — then grading with AI “may be better than human grading,” she told CNN.

AI allows teachers to grade papers faster and more consistently, and to avoid fatigue or boredom, she said.

But Leidner noted that when it comes to smaller classes or assignments with less definitive answers, grading should remain personalized so teachers can offer more specific feedback and recognize a student’s work, and therefore progress, over time.

“A teacher should be responsible for grading but can give some responsibility to the AI,” she said.

She suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and assign a numerical score to those measures. But teachers should evaluate a student’s work themselves when looking for novelty, creativity and depth of insight.

Leslie Layne, who has taught ChatGPT best practices in her writing workshops at Lynchburg University in Virginia, says she sees advantages for teachers but also disadvantages.

“Using feedback that wasn’t really from me seemed to diminish that relationship a little bit,” she said.

She also sees uploading a student’s work to ChatGPT as a “huge ethical consideration” that could potentially infringe on their intellectual property. AI tools like ChatGPT use such submissions to train their algorithms on everything from speech patterns to sentence construction to facts and figures.

Ethics professor Leidner agrees, saying this should be avoided especially for doctoral dissertations and master’s theses because the student may hope to publish the work.

“It is not right to upload material into AI without informing the students about this first,” she said. “And maybe students have to give consent.”

Some teachers rely on software called Writable, which uses ChatGPT to help grade papers but anonymizes submissions, so essays don’t include any personal information and aren’t shared directly with the system.

Teachers upload essays to the platform, recently acquired by education company Houghton Mifflin Harcourt, which then provides suggested feedback for students.

Other educators use platforms like Turnitin, which offer plagiarism detection tools to help teachers identify when assignments were written by ChatGPT and other AI tools. But this type of detection is far from foolproof; OpenAI shut down its own AI detection tool last year because of what the company called “low accuracy rates.”

Setting standards
Some schools are actively working on policies for both teachers and students. Alan Reid, a research associate at the Center for Research and Reform in Education (CRRE) at Johns Hopkins University, said he recently spent time working with K-12 educators who used the GPT tool to create personalized end-of-quarter reviews on report cards.

But like Layne, he admits the technology’s ability to write insightful feedback remains “limited.”

He currently sits on a committee at his college that is authoring an AI policy for faculty and staff; discussions are ongoing, not just about how teachers use AI in the classroom but about how educators use it more broadly.

He acknowledged schools are having conversations about using generative AI tools to create things like promotion and tenure files, performance reviews and job postings.

Nicolas Frank, an associate professor of philosophy at Lynchburg University, said universities and professors need to be on the same page when it comes to policy but need to be cautious.

“There are many dangers in making policy on AI at this stage,” he said.

He worries that it is still too early to understand how AI will be integrated into everyday life. He also worries that some administrators who don’t teach in the classroom might make policies that miss the nuances of teaching.

“That may create a danger of oversimplifying the problem of using AI in grading and instruction,” he said. “Oversimplification is how bad policy is made.”

For starters, he said, educators can identify obvious abuses of AI and begin formulating policies around those.

Meanwhile, Leidner said universities can keep their guidance high-level, such as making transparency a priority — so students have a right to know when AI is being used to evaluate their work — and identifying what types of information should never be uploaded into an AI tool or used in an AI prompt.

But she said universities must also remain open to “regular re-evaluation as the technology and usage evolve.”
