The arrival of ChatGPT sent shockwaves across academia as articles with titles like “Yes, We Are in a (ChatGPT) Crisis” splashed across higher education media. Reports of students using it to write their papers led to the immediate goal of keeping students away from AI.
Then a countermovement started when instructors realized they could use AI to cut time from their own tasks. Articles came out on how to use AI to create lessons, provide feedback to students, generate assessments, write video scripts, and handle other time-consuming work. Institutions have also been using AI in chatbots to answer student questions for a couple of years.
There is also a growing understanding that students will use AI in their future work, and as higher education is meant to prepare students for the future, it would do better to teach students how to use AI than to adopt the Luddite position of forbidding its use. AI is just another tool to assist humans in their endeavors. It is like the ship's computer on Star Trek, which would answer questions to provide the crew with valuable information for decision making. That is how the tool is being used now and will be used in the future. For instance, astronomers use it to scan images of millions of stars to find anomalies. Now that the initial shock has abated, we can take a more levelheaded look at the real dangers of AI and at how to incorporate it into assignments that prepare students for the world they will enter.
Higher education has two main worries about AI. One, students can use it to write papers, making plagiarism easier. Two, it might give students false information. Each is a bit of a red herring.
First, there are AI checkers, such as AI Detector, through which instructors can run student work. A lot has been said about the fact that these checkers are not perfect, but neither are ordinary plagiarism checkers like Turnitin, and that has not created similar hand-wringing in academia. Accuracy claims for these detectors range from 95 to 99 percent, and I personally found AI Detector remarkably accurate with some test cases.
But the point is that the situation is no different from ordinary plagiarism. There are ways to fool Turnitin, just as there are ways to fool AI detectors. There is nothing we can do about that other than to have institutional policies against plagiarism and to do our best to detect it. We have laws against murder and police to investigate it, but people still kill, and we go about our business despite this fact. Higher education needs to do the same. The possibility of plagiarism says nothing about whether we should assign students to use AI, just as the possibility of ordinary plagiarism has not stopped us from giving students writing assignments.
As for accuracy, there seems to be a widespread assumption that AI-generated information must be wrong because it draws from the unlettered masses rather than ivory-tower sages, but I have done some test queries and found the results remarkably accurate. Plus, plenty of academic articles have been found to contain incorrect information or to be outright fraudulent. And the wisdom of crowds is a well-documented phenomenon: for certain types of questions, the aggregate answer of a large group of amateurs will be more accurate than that of a small group of professionals.
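The statistical intuition behind the wisdom of crowds is easy to demonstrate: when individual errors are independent, averaging many noisy guesses cancels most of the noise. The following is a minimal sketch in Python; the true value, crowd size, and noise levels are hypothetical numbers chosen purely for illustration, not data from any study.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the quantity being estimated (hypothetical)

# A large crowd of "amateurs": each guess is unbiased but quite noisy.
crowd = [TRUE_VALUE + random.gauss(0, 30) for _ in range(1000)]

# A small panel of "professionals": less noise per person, but only a few of them.
panel = [TRUE_VALUE + random.gauss(0, 10) for _ in range(3)]

# Aggregate each group's answer by simple averaging.
crowd_estimate = sum(crowd) / len(crowd)
panel_estimate = sum(panel) / len(panel)

crowd_error = abs(crowd_estimate - TRUE_VALUE)
panel_error = abs(panel_estimate - TRUE_VALUE)

print(f"Crowd of 1000 amateurs: error = {crowd_error:.2f}")
print(f"Panel of 3 experts:     error = {panel_error:.2f}")
```

Because independent errors shrink roughly with the square root of the group size, the crowd's averaged answer typically lands closer to the truth than the small expert panel's, even though each individual expert is far more accurate than each individual amateur.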
We insist that students cite sources for any factual claims, and if the AI system they use does not provide a source, then students need to find one with that information if they are to use it in their work. Note that Google is currently experimenting with an AI system that does provide sources, as seen in the screenshot below. Faculty can recommend that students use it for their research.
Higher education has moved away from having students memorize information on grounds that there simply is too much information to memorize. We now teach information literacy, which is knowledge of how to find information using available tools. AI is just the latest advancement in information retrieval, and higher education needs to focus on teaching how to use it.
Plus, as finding information gets easier and easier, learning how to evaluate and apply it becomes more and more important. Faculty should focus assignments more on critical thinking.
Rather than try to delineate all the various assignments that can use AI, it is easier to put them into categories for faculty to use as they wish. Here are two such categories.
This kind of assignment makes AI itself the focus. An instructor can assign students to choose a class topic and ask an AI system to answer a question about it, such as the example in the screenshot above about the ethical issues with genetics. Students would then evaluate the answer by comparing it with other sources. They would answer questions like the following:
The instructor can also require students to ask the same question in different ways and evaluate how the answers differ. In this way, students learn how an AI system interprets a question and produces results. That knowledge will inform how they use AI systems in the future. Plus, they are learning about the topic through their use of AI responses and comparative research on those responses.
A second assignment type is for students to use AI to gain an overview of the topic and then pursue it in more detail with focused resources, similar to how Wikipedia is already used. Here students pick a topic and ask a couple of AI systems (e.g., ChatGPT and Google Bard) a question about it so they get a range of answers. They combine the answers to get the lay of the land on that topic and then build their work from other resources on the topic.
For both assignments, students submit the AI results of their query and the product that they created. This allows instructors to distinguish student thinking from the AI output.
Besides research, students can use AI to generate feedback on their work. The feedback ChatGPT provides focuses on general writing topics, such as composition and detail. It will not provide much feedback on substantive issues, such as factual errors or missing topics. But this is a good way for students to improve the clarity of their writing before submitting it to the instructor. See the first part of feedback ChatGPT provided on a sample student work below:
Instructors can encourage students to use Grammarly or the writing checker built into their word processor to address simple mistakes in grammar and spelling and then submit the work to an AI system to improve the clarity of the writing. This will free instructors from correcting surface-level writing errors and allow them to focus on the thinking issues they would rather discuss anyway. This use of AI is not much different from students doing peer reviews, which instructors have learned improve student work. It also provides students with skills that they can apply to their future work.
These are just a few ways to teach students about how to use AI in their work. Undoubtedly, more will come as systems develop. But in the end AI is just another tool, and the job of higher education is to teach students how to use it to be more successful in the future.