Using AI: Credit, Content, Credibility [Republica Repost]

AI is blurring boundaries of all kinds, so we all need a communicative philosophy to help us set boundaries for content, credit, and credibility when using AI tools. Learning with and from AI is flawed because the content is often inaccurate or incomplete, a product of extremely narrow datasets. It is also problematic because AI’s algorithms (or thought patterns) are limited to certain discourse traditions and practices. And AI content can be unreliable because the tools are designed mainly to “generate” plausible patterns rather than to locate and synthesize information (so hallucination is a feature, not a bug).

Published in The Republica on May 16, 2024

[Image: A college professor reviews a paper at a desk flanked by books on one side and a futuristic AI interface on the other, while a confused student waits; behind them, a digital scale balances the AI interface against the professor’s books, symbolizing the balance between AI assistance and human judgment.]

Faced with machines that seem to (and often actually do) match our linguistic abilities, students, professionals, and the general public alike are struggling to maintain boundaries for effective learning, professional relationships, and honest communication. The usefulness of any AI tool lies at the intersection of its ability to generate content and our ability to judge that content knowledgeably and skillfully. If a tool generates more than we can judge, we leave the zone of safety and enter the zone of danger, risking the value of the content, making the credit we seek undeserved, and threatening our credibility and human relationships.

AI and credit

Let us begin with learning. Imagine that you were a college professor before the internet, and you learned that one of your students submitted to you a research paper that he had asked his cousin to write. Imagine that he actively guided his cousin to meet your expectations for the top grade. Most likely you would not have given that student the top grade. Now imagine that you are a college professor today, and you just learned that a student has submitted a paper he prompted an AI tool (such as ChatGPT) to write. Imagine that he skillfully used ChatGPT to produce the paper, and the final product meets your expectations for the top grade, while he learned little from the process. Would you give the student the top grade? Continue reading

We’re Hallucinating, Not AI [Republica Repost]

When lawyers lie, doctors kill, or teachers fool their students, and then deflect responsibility to a machine, we must see these problems as the visible tip of humanity’s collective hallucination.

Published in The Republica on March 17, 2024

Michael Cohen, a former lawyer of a former US president, was recently caught submitting AI-generated (meaning, fake) legal cases to court. He was appealing to shorten his probation for having earlier lied to Congress. But Cohen is not a rare case. Artificial intelligence tools are fooling millions of educated people in highly sensitive professions across the world, not just everyday people using them for mundane purposes. Doctors are embracing AI-generated assessments and solutions, and engineers even more so.

Most people know that AI tools are designed to generate plausible sentences and paragraphs rather than simply locate or synthesize real information. The “G” in ChatGPT stands for “generative,” that is, making things up. But tools like these are so powerful at processing the information in their datasets that they can produce stunningly credible-looking responses, ranging from the most reliable to the most absurd (called “hallucination”), on a wide range of topics. And they cannot distinguish the absurd from the reliable: human users have to do that. The problem is that because people increasingly trust AI tools without using their own judgment, humanity is becoming the party that is collectively hallucinating, more than AI. Speed, convenience, gullibility, and a seeming desire to worship the mystique of a non-human writer/speaker are all leading humans to ignore the warnings that even AI developers themselves place everywhere from their login screens to their About pages.

Deflecting responsibility 

Continue reading

Expertise Cycle — Rethinking Faculty Training [Republica Repost]

Published in The Republica on July 26, 2023

To truly improve teaching, it is time to take the expert out of training, center professional development back in the classroom, and unleash the power of the practitioner-as-expert, letting such a cycle of expertise replace traditional teacher training.

A lot more teacher training is taking place in Nepal today than, say, ten years ago. In schools and universities, training programs range from informal one-hour sessions run by teachers to formal multi-day ones organized by institutions. They also range from free virtual gatherings to lavish retreats at fancy venues. Unfortunately, this great development remains characterized mostly by lecture, with hands-on practice being the exception.

There is a reason why teacher training remains entrenched in the old habit of delivering lectures: both trainers and trainees continue to believe that an expert is needed to “deliver” content and that the key objective of training is to increase knowledge, rather than for trainees to learn by doing, sharing, and experiencing.

In reality, content has little to no practical value in training. We might as well train farmers to improve crop yields by taking them to fancy hotels in the city and giving them lectures on how to do it. Even simulated activities and discussions are inadequate. Imagine an agricultural expert taking a group of farmers to a sandbank to show them how to use modern farm equipment. Such an expert can teach them how to operate the tools, but he won’t really show them how to grow a crop.

We need a radical shift in how training is done. Training should not only involve participants in doing things and sharing experiences, solving problems and creating materials, rather than listening to lectures or even holding discussions; it should also happen right in their classrooms, as I will describe. A little content may be needed to set up the context, clarify instructions, or guide follow-up discussion. But if content takes up more than a quarter of a program’s time, it is no longer training.

Skipping the expert

One easy and effective way to make training function as actual training is to get rid of the expert and use a facilitator instead. The less the facilitator has to say, the better; the more time and opportunity she creates for participants, the better. In fact, when the facilitator tells participants that she is not an expert, and that they are the experts, in that they are the ones teaching, the training becomes far more effective. It becomes more effective still when one of the participating practitioners serves as facilitator. All the facilitator needs are skills for managing the process and fostering collaboration. In exchange for the quantity and depth/breadth of knowledge lost with the external expert, such training gains far deeper grounding in practice and far greater commitment and accountability among participants. This shift to expertless training does require courage.

Continue reading

Educating Beyond the Bots [Republica Repost]

Published in The Republica on February 12, 2023

The current discourse about artificial intelligence not only reflects a narrow view of education. It also represents romanticization of, or alarmism about, new technologies, while insulting students as dishonest by default. 

“It has saved me 50 hours on a coding project,” whispered one of my students to me in class recently. He was using the artificial intelligence tool named ChatGPT for a web project. His classmates were writing feedback on his reading response for the day, testing a rubric they had collectively generated for how to effectively summarize and respond to an academic text.

The class also observed ChatGPT’s version of the rubric and agreed that there is some value in “giving it a look in the learning process.” But they had decided that their own brain muscles must be developed by grappling with the process of reading and summarizing, synthesizing and analyzing, and learning to take intellectual positions, often through an emotionally felt experience. Our brain muscles couldn’t be developed, the class concluded, by simply looking at content gathered by a bot from the internet, however good that content was. When the class finished writing, they shared their often brutal assessment of the volunteer writer’s response to the reading. The class learned by practicing, not by asking for an answer.

Beyond the classroom, however, the discourse about artificial intelligence tools “doing writing” has not yet become as nuanced as it is among my college students. “The college essay is dead,” declared Stephen Marche in The Atlantic recently. This argument rests on a serious but common misunderstanding: mistaking a means of education for an end. The essay embodies a complex process and experience that teach many useful skills. It is not a simple product.

But that misunderstanding is just the tip of an iceberg. The current discourse about artificial intelligence not only reflects a shrunken view of education. It also represents a constant romanticization of, or alarmism about, new technologies influencing education. And, most saddening for educators like me, it shows a disregard for students, treating them as dishonest by default.

Broaden the view of education

If we focus on writing as a process and vehicle for learning, it is fine to kill the essay as a mere product. It is great if bot-generated texts serve certain purposes. Past generations used templates for letters and memos, not to mention forms to fill. New generations will adapt to more content they didn’t write.

What bots should not replace is the need for us to grow and use our own minds and conscience, to judge when we can or should use a bot and how and why. Teachers must teach students how to use language based on contextual, nuanced, and sensitive understanding of the world. Students must learn to think for themselves, with and without using bots.

Continue reading

Unteaching Tyranny [Republica Repost]

It is possible and necessary to use technology to empower and inspire, not to tyrannize. If nothing else, the harrowing global pandemic must help educators come to our senses about the overuse and misuse of authority.

When a fellow professor in a teacher training program said last month that he takes attendance twice during class since going online, I was surprised by the tyrannical idea. What if a student lost internet connection or electricity, ran out of data or was sharing a device, had family obligations or a health problem? We’re not just “going online”; we’re also going through a horrifying global pandemic!

At a workshop on “humanizing pedagogy” for a Bangladeshi university more recently, when asked to list current teaching/learning difficulties, many participants cited student absence, disengagement, dishonesty, and the expectation of easy grades. When asked to list instructional solutions, many proposed technocratic and rather authoritarian methods. The very system of our education, I realized, is tyrannical, and most of us simply try to make it work as it is.

Tyranny, now aided by technology, goes beyond formal education. “You can only fill your bucket if you’ve brought it empty,” said a young yoga instructor in Kathmandu, on Zoom last week. She kept demanding, by name, that participants turn on their video feeds. We kept turning them off as needed. Someone kept individually “spotlighting” us on screen. Yet we were always muted, even as we were constantly asked to respond to the instructors’ questions by chat, thumbs up, hand wave, and smile. Technology magnified autocratic tendencies, undermining the solemnity of yoga.

The quality of the yoga lectures and instruction didn’t match the technologically enforced discipline. “Our lungs remove ninety percent of the toxins from our body,” said an instructor. Surya namaskar fixes both overweight and underweight, said another, as well as cancer and diabetes. Googling these claims led only to junk websites. I quickly became a disengaged learner, waiting for the lectures to be over. I read a book on yoga during lectures, or took notes on how technology can magnify tyrannical elements of instruction and academe. I reflected on how to make my own teaching more humane.

This essay is a broader commentary on the element of tyranny in education. But to show that the idea of making teaching more humane is not just a romantic ideal, I share how we can operationalize the concept, including and especially during this disrupted time.

Operationalizing humanity

Continue reading

Making Education Three-Dimensional [Republica Repost]

Published in The Republica on October 23, 2018

Higher education must be a three-dimensional deal, one that includes acquiring knowledge, developing skills for the workplace, and having meaningful experiences that shape the learner for a lifetime.

Last summer, I had a unique opportunity to visit one of the most successful business families in Dhaka, Bangladesh, during an academic trip there, along with another New York professor. The family, one of whose members I had taught here in the States a few years earlier, runs an impressive business empire in the country. At one point, when the conversation turned to education, one of our hosts lamented that their company too often had to look beyond Bangladeshi universities for top talent. I asked why.

Graduates of local universities, he said, had solid academic knowledge of the subjects. “But if I give them a business problem and ask how they’d solve it, they give me a textbook answer.” That remark made me think about the challenges of higher education across South Asia for quite some time.

Knowledge isn’t enough

Analyzing a business situation, one could say, requires skills that can only be learned after joining the workforce. Colleges are designed to impart knowledge, one could argue, to lay the foundation of the disciplines. Indeed, this view of college should not be considered outdated. Colleges should not be asked to just prepare students for jobs; they’re centers of learning that must shape habits of mind and inculcate productive perspectives on society and profession for a lifetime. Job preparation can be done by a career center on campus. Continue reading

International Illusions [Republica Repost]

Published in The Republica on November 29, 2017

One can only hope that Nepali scholars and policymakers will come back to their senses and start informing the public that English-only instruction is dangerous.

Thousands of Londoners kept dying every year during the early 1800s after the city started draining sewage into the Thames River. This happened because a “scientific orthodoxy” that cholera was caused by “vapor” from the dead, rather than being a waterborne disease, prevented the city from fixing the real problem for decades.

One can hope that Nepali scholars and policymakers will similarly come to their senses and start informing the public that English-only instruction (EOI) is a dangerous social experiment that needs changing. Note that the emphasis is on “only,” the culprit in this case.

In the past two essays here, I wrote about the historical and political backdrop of EOI and then about its dangers and the alternatives. In this one, I argue that Nepali education must teach other “international” languages as well, if we are sincere about English as a language of international communication and economic opportunity, and not of international illusions.

As a bonus, that sincerity could help open the gates to new opportunities for our educational institutions and for society. Continue reading