AI use is exposing people’s lack of, or disregard for, knowledge, honesty, and accountability. If we want to avoid letting AI make us ignorant, dishonest, and unaccountable to the broader social good, we must commit to elevating and maintaining high standards in these three regards.
Published in The Republica on June 22, 2024
At a gathering of department chairs and other academic leaders some time ago, my university’s faculty development center ran a fascinating warm-up activity. They gave all attendees green, yellow, and red stickers and asked them to attach one to each of several statements posted on the wall: how well do you think ChatGPT can do this task? After we all went around sticking green stickers where we thought ChatGPT could do an excellent job, yellow for an okay job, and red where we thought it couldn’t do that kind of task at all, we had an open conversation.
The conversation revealed an extremely important reality about public perception of artificial intelligence (AI) tools: people, including professors, overvalue AI’s capacity in fields outside their expertise, and that has serious consequences. For example, I was the only writing professor in the room, and I noticed that besides me, only a colleague from the philosophy department stuck a red sticker under the statement that ChatGPT can draft the kinds of essay assignments we give to our students. To me, it was shocking that computer scientists, economists, medical science scholars, and business professors alike would believe that AI tools can “do writing just fine” (to borrow the words of a faculty trainer at another event). Similarly, the conversation made it clear that while most others put green stickers on ChatGPT’s ability to complete coding assignments, the computer science professors in the room did not believe that. They knew better, as I knew better about writing. And the same was true of other disciplines.