
The arrival of ChatGPT sent shockwaves across academia as articles with titles like “Yes, We Are in a (ChatGPT) Crisis” splashed across higher education media. Reports of students using it to write their papers led to the immediate goal of keeping students away from AI.

Then a countermovement started when instructors realized they could use AI to save time on their own tasks. Articles appeared on how to use AI to create lessons, provide feedback to students, generate assessments, write video scripts, and handle other time-consuming work. Institutions have also been using AI chatbots to answer student questions for a couple of years.

There is also a growing understanding that students will use AI in their future work, and as higher education is meant to prepare students for the future, it would do better to teach students how to use it than adopt the Luddite position of forbidding its use. AI is just another tool to assist humans in their endeavors. It is like the ship’s computer on Star Trek, which would answer questions to provide the crew with valuable information for decision making. That is how the tool is being used now and will be used in the future. For instance, astronomers use it to scan images of millions of stars to find anomalies. Now that the initial shock has abated, we can take a more levelheaded look at the real dangers of AI and how to incorporate it into assignments that prepare students for the world that they will enter.

What is the real danger of AI?

Higher education has two main worries about AI. One, students can use it to write papers, making plagiarism easier. Two, it might give students false information. Each is a bit of a red herring.

First, there are AI checkers, such as AI Detector, through which instructors can run student work. A lot has been said about the fact that these checkers are not perfect, but neither are ordinary plagiarism checkers like Turnitin, and that has not created similar hand-wringing in academia. Accuracy claims for these detectors range from 95 to 99 percent, and I personally found AI Detector remarkably accurate on some test cases.

But the point is that the situation is no different from ordinary plagiarism. There are ways to fool Turnitin, just as there are ways to fool AI detectors. There is nothing we can do about that other than to have institutional policies against plagiarism and do our best to detect it. We have laws against murder and police to investigate it, but people still kill, and we go about our business despite this fact. Higher education needs to do the same. The possibility of plagiarism says nothing about whether we should assign students to use AI, just as the possibility of ordinary plagiarism has not stopped us from giving students writing assignments.

As for accuracy, there seems to be a widespread assumption that AI-generated information must be wrong because it draws from the unlettered masses rather than ivory-tower sages, but I have done some test queries and found the results remarkably accurate. Plus, plenty of academic articles have been found to contain incorrect information or to be outright fraudulent. And the Wisdom of Crowds phenomenon has demonstrated that, for certain types of questions, the aggregate answer of a large group of amateurs is more accurate than that of a small group of professionals.

We insist that students cite sources for any factual claims, and if the AI system they use does not provide a source, then students need to find one with that information if they are to use it in their work. Note that Google is currently experimenting with an AI system that does provide sources, as seen in the screenshot below. Faculty can recommend that students use it for their research.

Results for the query "What are some ethical issues with genetics?" The Google AI lists five ethical issues and links to the reputable sources from which it drew them.

Higher education has moved away from having students memorize information on the grounds that there is simply too much information to memorize. We now teach information literacy: knowledge of how to find information using available tools. AI is just the latest advancement in information retrieval, and higher education needs to focus on teaching students how to use it.

Plus, as finding information gets easier and easier, learning how to evaluate and apply it becomes more and more important. Faculty should focus assignments more on critical thinking.

AI assignments

Rather than try to delineate all the various assignments that can use AI, it is easier to put them into categories for faculty to use as they wish. Here are two such categories.

Research on AI

This kind of assignment makes AI itself the focus. An instructor can assign students to choose a class topic and ask an AI system to answer a question about it, such as the example in the screenshot above about the ethical issues with genetics. Students would then evaluate the answer by comparing it with other sources. They would answer questions like the following:

  1. How comprehensive is the answer? What topics were left out?
  2. How accurate is the answer? Was some information wrong, and if so, which information?
  3. Did the answer reflect any biases?

The instructor can also require students to ask the same question in different ways and evaluate how the answers differ. In this way, students learn how an AI system interprets a question and produces results. That knowledge will inform how they use AI systems in the future. Plus, they are learning about the topic through their use of AI responses and comparative research on those responses.

AI as the starting point for research

A second assignment type is for students to use AI to gain an overview of the topic and then pursue it in more detail with focused resources, similar to how Wikipedia is already used. Here students pick a topic and ask a couple of AI systems (e.g., ChatGPT and Google Bard) a question about it so they get a range of answers. They combine the answers to get the lay of the land on that topic and then build their work from other resources on the topic.

For both assignments, students submit the AI results of their query and the product that they created. This allows instructors to distinguish student thinking from the AI output.

AI evaluation of student work

Besides research, students can use AI to generate feedback on their work. The feedback ChatGPT provides focuses on general writing topics, such as composition and detail. It will not provide much feedback on substantive issues, such as factual errors or missing topics. But this is a good way for students to improve the clarity of their writing before submitting it to the instructor. See the first part of feedback ChatGPT provided on a sample student work below:

ChatGPT feedback on how to improve an essay titled "The Evolution of CSR in the 21st Century: Embracing Technology and Ethical Management." Feedback focuses on the intro, definitions of technologies, and a section on challenges and recommendations.

Instructors can encourage students to use Grammarly or the writing checker built into their word processor to address simple mistakes in grammar and spelling and then submit the work to an AI system to improve the clarity of the writing. This frees instructors from correcting writing errors and allows them to focus on the thinking issues they would rather discuss anyway. This use of AI is not much different from students doing peer reviews, which instructors have learned improves student work. It also provides students with skills that they can apply to their future work.

These are just a few ways to teach students about how to use AI in their work. Undoubtedly, more will come as systems develop. But in the end AI is just another tool, and the job of higher education is to teach students how to use it to be more successful in the future.