Using AI: Credit, Content, Credibility [Republica Repost]

AI is blurring boundaries of all kinds, so we all need a communicative philosophy to help us set boundaries for content, credit, and credibility when using AI tools. Learning with and from AI is flawed because the tools' content can be inaccurate or incomplete, a product of their extremely narrow datasets. It is also problematic because AI's algorithms (or thought patterns) are limited to certain discourse traditions and practices. And AI content can be unreliable because the tools are designed mostly to "generate" plausible patterns rather than to locate and synthesize information (so hallucination is a feature, not a bug).


Published in The Republica on May 16, 2024

[Image: A professor reviews a student's paper beside an AI interface generating text, with a scale balancing the two, evoking the tension between AI assistance and human judgment.]

Faced with machines that seem to (and often actually do) match our linguistic abilities, students, professionals, and the general public alike are struggling to maintain boundaries for effective learning, professional relationships, and honest communication. The usefulness of any AI tool lies at the intersection of its ability to generate content and our ability to judge that content knowledgeably and skillfully. If a tool generates more than we can judge, we leave the zone of safety and enter the zone of danger, which risks undermining the value of the content, making the credit we seek undeserved, and threatening our credibility and human relationships.

AI and credit

Let us begin with learning. Imagine that you were a college professor before the internet, and you learned that one of your students submitted to you a research paper that he had asked his cousin to write. Imagine that he actively guided his cousin to meet your expectations for the top grade. Most likely you would not have given that student the top grade. Now imagine that you are a college professor today, and you just learned that a student has submitted a paper he prompted an AI tool (such as ChatGPT) to write. Imagine that he skillfully used ChatGPT to produce the paper, and the final product meets your expectations for the top grade, while he learned little from the process. Would you give the student the top grade? Continue reading

We’re Hallucinating, Not AI [Republica Repost]

When lawyers lie, doctors kill, or teachers fool and then deflect responsibility to a machine, we must see these problems as the visible tip of humanity’s collective hallucination. 

Published in The Republica on March 17, 2024

Michael Cohen, a former lawyer of a former US president, was recently caught submitting AI-generated (meaning, fake) legal cases to court. He was seeking an early end to his probation for having lied to Congress. But Cohen is not a rare case. Artificial intelligence tools are misleading millions of educated people in highly sensitive professions across the world, not just everyday people using them for mundane purposes. Doctors are embracing AI-generated assessments and solutions, and engineers even more so.

Most people know that AI tools are designed to generate plausible sentences and paragraphs, rather than to locate or synthesize real information. The "G" in ChatGPT stands for "generative," which is to say, making things up. But tools like this are so powerful at processing the information in their datasets that they can produce stunningly credible-looking responses, ranging from the most reliable to the most absurd (the latter called "hallucination"), on a wide range of topics. And they cannot distinguish the absurd from the reliable: human users have to do that. The problem is that because people increasingly trust AI tools without applying their own judgment, humanity is becoming the party that is collectively hallucinating, more than AI. Speed, convenience, gullibility, and a seeming desire to worship the mystique of a non-human writer/speaker are all leading people to ignore the warnings that even AI developers place everywhere from their login screens to their About pages.
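To make the "generative" point concrete, here is a deliberately toy sketch in Python. Everything in it, from the word pairs to the probabilities, is hypothetical, and no real model works from a hand-written table like this; but the loop illustrates the design fact above: the next word is chosen for plausibility, and truth never enters the procedure.

```python
import random

# A toy sketch, NOT any real model's code: the word pairs and the
# probabilities below are invented purely for illustration.
NEXT_WORD_PROBS = {
    ("the", "court"): {"cited": 0.5, "ruled": 0.3, "held": 0.2},
    ("court", "cited"): {"Smith v. Jones": 0.6, "Doe v. Roe": 0.4},  # possibly invented cases
}

def generate(context, steps, probs):
    """Repeatedly append a plausible-seeming next word.

    Note what is missing: there is no step that checks whether the
    emitted text is true. Plausibility is the only criterion.
    """
    words = list(context)
    for _ in range(steps):
        options = probs.get(tuple(words[-2:]))
        if not options:
            break
        # Sample in proportion to plausibility; truth never enters the loop.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate(["the", "court"], steps=2, probs=NEXT_WORD_PROBS))
# e.g. "the court cited Smith v. Jones" -- fluent and confident, yet possibly
# a fabricated citation: "hallucination" is just generation working as designed.
```

A real model replaces the little table with billions of learned parameters, but the absence of a built-in truth check is the same, which is why the judging must come from the human user.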

Deflecting responsibility 

Continue reading

Educating Beyond the Bots [Republica Repost]

Published in The Republica on February 12, 2023

The current discourse about artificial intelligence not only reflects a narrow view of education. It also represents romanticization of, or alarmism about, new technologies, while insulting students by treating them as dishonest by default.

“It has saved me 50 hours on a coding project,” whispered one of my students to me in class recently. He was using the artificial intelligence tool named ChatGPT for a web project. His classmates were writing feedback on his reading response for the day, testing a rubric they had collectively generated for how to effectively summarize and respond to an academic text.

The class also examined ChatGPT's version of the rubric and agreed that there was some value in "giving it a look in the learning process." But they had decided that their own brain muscles must be developed by grappling with the process of reading and summarizing, synthesizing and analyzing, and learning to take intellectual positions, often through an emotionally felt experience. Our brain muscles couldn't be developed, the class concluded, by simply looking at content gathered by a bot from the internet, however good that content was. When the class finished writing, they shared their often brutal assessments of the volunteer writer's response to the reading. The class learned by practicing, not by asking for an answer.

Beyond the classroom, however, the discourse about artificial intelligence tools "doing writing" has not yet become as nuanced as it is among my college students. "The college essay is dead," declared Stephen Marche in The Atlantic recently. This argument rests on a serious but common error: mistaking a means of education for an end. The essay embodies a complex process and experience that teach many useful skills; it is not a simple product.

But that misunderstanding is just the tip of an iceberg. The current discourse about artificial intelligence not only reflects a shrunken view of education. It also represents a constant romanticization of, or alarmism about, new technologies influencing education. And most saddening for educators like me, it treats students as dishonest by default.

Broaden the view of education

If we focus on writing as a process and vehicle for learning, it is fine to kill the essay as a mere product. It is great if bot-generated texts serve certain purposes. Past generations used templates for letters and memos, not to mention forms to fill. New generations will adapt to more content they didn’t write.

What bots should not replace is the need for us to grow and use our own minds and conscience, to judge when we can or should use a bot and how and why. Teachers must teach students how to use language based on contextual, nuanced, and sensitive understanding of the world. Students must learn to think for themselves, with and without using bots.

Continue reading

Unteaching Tyranny [Republica Repost]

It is possible and necessary to use technology to empower and inspire, not to tyrannize. If nothing else, the harrowing global pandemic must bring educators to our senses about the overuse and misuse of authority.


When a fellow professor in a teacher training program said last month that he takes attendance twice during class since going online, I was surprised by the tyrannical idea. What if a student lost internet connection or electricity, ran out of data or was sharing a device, had family obligations or a health problem? We’re not just “going online,” we’re also going through a horrifying global pandemic!

At a workshop on “humanizing pedagogy” for a Bangladeshi university more recently, when asked to list teaching/learning difficulties now, many participants listed challenges due to student absence, disengagement, dishonesty, and expectation of easy grades. When asked to list instructional solutions, many proposed technocratic and rather authoritarian methods. The very system of our education, I realized, is tyrannical and most of us usually try to make it work as it is.

Tyranny, now aided by technology, goes beyond formal education. "You can only fill your bucket if you've brought it empty," said a young yoga instructor in Kathmandu, on Zoom last week. She kept demanding, by name, that participants turn on their video feeds. We kept turning ours off as needed. Someone kept individually "spotlighting" us on screen. But we were always muted, even as we were constantly asked to respond to the instructor's questions by chat, thumbs up, hand wave, and smile. Technology magnified autocratic tendencies, undermining the solemnity of yoga.

The quality of yoga lectures and instruction didn’t match the technologically enforced discipline. “Our lungs remove ninety percent of toxins from our body,” said an instructor. Surya namaskar fixes both overweight and underweight, said another, as well as cancer and diabetes. Googling these claims led to junk websites. I quickly became an unengaged learner, waiting for lectures to be over. I read a book on yoga during lectures, or took notes on how technology can magnify tyrannical elements of instruction and academe. I reflected on how to make my own teaching more humane.

This essay is a broader commentary on the element of tyranny in education. But to show that the idea of making teaching more humane is not just a romantic ideal, I share how we can operationalize the concept, including and especially during this disrupted time.

Operationalizing humanity

Continue reading

Abolish Private Education? [Republica Repost]

Published in The Republica on January 2, 2019

To truly counter arguments for abolishing it, private sector education must rethink its socioeconomic roles in the new national context and create robust models of faculty development.

The private sector in Nepal's education has always been controversial, including in national policy discourse. A current hot-button topic is whether it should even exist or should be gradually abolished.

In response, instead of focusing on what to do about the problems undergirding the "attack" on its existence, its leaders tend to go on the counterattack, offering no substantive social vision.

Public education is criticized as well, and its sustainability questioned just as often. "TU is dead," declared the title of one article published here some time ago. "Unacademic activities of academics," read the title of another, in Nepali, an essay that mixed facts with accusations. Continue reading

Scale what?

I was recently participating in a webinar about a MOOC-style first-year writing course, and a few words kept confusing me. Content. Delivery. Scale. The last one, especially, stopped me in my tracks.

SCALING?

When Tenzing Norgay Sherpa and Edmund Hillary climbed Mt. Everest for the first time, they weren't doing numbers. They were undertaking a superhuman challenge. It was a qualitative matter. It was a matter of inspiration. Making the impossible accessible. Showing that someone could actually do it. Redefining success. "Because it was there." It, the mountain that had killed countless people who tried. They were scaling the un-scale-able.

"The real value add[ed] of higher education," says Joshua Kim, writing in Inside Higher Ed, "cannot occur at web scale. It can only occur at human scale." That scale occurs "where a skilled and passionate educator interacts directly with a student to guide and shape their learning." As Kim adds, in an article meant to debunk myths and criticisms of open, at-scale online education (not to critique it), "[o]pen online courses at scale expose just how valuable, essential, and irreplaceable are our tight-knit learning communities. Never before has the teaching efforts of a gifted, knowledgeable and passionate instructor . . . been as valuable and as essential." Online education at scale has to somehow substitute for one-on-one and/or face-to-face human interaction, reducing the time and attention an educator can give to learners who want to ask questions and feel a human presence. In a writing class, only some things can be scaled without fundamentally compromising learning. Continue reading