
Using AI: Credit, Content, Credibility [Republica Repost]

AI is blurring boundaries of all kinds, so we all need a communicative philosophy to help us set boundaries for content, credit, and credibility when using AI tools. Learning with and from AI tools is flawed because their content is inaccurate or incomplete due to their extremely narrow datasets. It is also problematic because AI’s algorithms (or thought patterns) are limited to certain discourse traditions and practices. And AI content can be unreliable because the tools are mostly designed to “generate” plausible patterns rather than locate and synthesize information (so hallucination is a feature, not a bug).


Published in The Republica on May 16, 2024

[Image: a college professor reviews a paper at a desk stacked with books, opposite a futuristic AI interface generating text; a confused student stands before the desk, and a digital scale behind the professor balances the AI interface against the professor’s books, symbolizing the balance between AI assistance and human judgment in an academic setting.]

Faced with machines that seem to (and often actually do) match our linguistic abilities, students, professionals, and the general public alike are struggling to maintain boundaries for effective learning, professional relationships, and honest communication. The usefulness of any AI tool lies at the intersection of its ability to generate content and our ability to judge that content knowledgeably and skillfully. If a tool generates more than we can judge, we cross from the zone of safety into the zone of danger, which risks undermining the value of the content, making the credit we seek undeserved, and threatening our credibility and human relationships.

AI and credit

Let us begin with learning. Imagine that you were a college professor before the internet, and you learned that one of your students submitted to you a research paper that he had asked his cousin to write. Imagine that he actively guided his cousin to meet your expectations for the top grade. Most likely you would not have given that student the top grade. Now imagine that you are a college professor today, and you just learned that a student has submitted a paper he prompted an AI tool (such as ChatGPT) to write. Imagine that he skillfully used ChatGPT to produce the paper, and the final product meets your expectations for the top grade, while he learned little from the process. Would you give the student the top grade?

Continue reading

We’re Hallucinating, Not AI [Republica Repost]

When lawyers lie, doctors kill, or teachers deceive, and then deflect responsibility to a machine, we must see these problems as the visible tip of humanity’s collective hallucination.

Published in The Republica on March 17, 2024

Cohen, a former lawyer of a former US president, was recently caught submitting AI-generated (meaning, fake) legal cases to court. He was appealing to shorten his probation for having lied to Congress earlier. But Cohen is not a rare case. Artificial intelligence tools are fooling millions of educated people in highly sensitive professions across the world, not just everyday people using them for mundane purposes. Doctors are embracing AI-generated assessments and solutions, and engineers even more so.

Most people know that AI tools are designed to generate plausible sentences and paragraphs, rather than just locate or synthesize real information. The “G” in ChatGPT means “generative,” or making things up. But AI tools like this are so powerful at processing the information in their datasets that they can produce stunningly credible-looking responses, ranging from the most reliable to the most absurd (called “hallucination”), on a wide range of topics. And they cannot distinguish the absurd from the reliable: human users have to do that. The problem is that because people increasingly trust AI tools without using their own judgment, humanity, more than AI, is becoming the party that is collectively hallucinating. Factors like speed, convenience, gullibility, and a seeming desire to worship the mystique of a non-human writer/speaker are all leading humans to ignore warnings that even AI developers themselves place everywhere from their login screens to their about pages.

Deflecting responsibility 

Continue reading


Expertise Cycle — Rethinking Faculty Training [Republica Repost]

Published in the Republica on July 26, 2023

To truly improve teaching, it is time to take the expert out of training, center professional development back in the classroom, and unleash the power of the practitioner-as-expert–letting such a cycle of expertise replace traditional teacher training.

A lot more teacher training is taking place in Nepal today than, say, ten years ago. In schools and universities, training programs range from informal one-hour sessions run by teachers to formal multi-day ones organized by institutions. They also range from free, virtual gatherings to lavish retreats at fancy venues. Unfortunately, this welcome development remains characterized mostly by lecture, with hands-on practice being the exception.

There is a reason why teacher training remains entrenched in the old habit of delivering lectures. Both trainers and trainees continue to believe that an expert is needed to “deliver” content, that the key objective of training is to increase knowledge, rather than for trainees to learn by doing, sharing, and experiencing.

In reality, content has little to no practical value in training. We might as well train farmers to improve crop yields by taking them to fancy hotels in the city and giving them lectures on how to do it. Even simulated activities and discussions are inadequate. Imagine an agricultural expert taking a group of farmers to a sandbank to show them how to use modern farm equipment. Such an expert can teach them how to use the tools, but he won’t really show them how to grow a crop.

We need a radical shift in how training is done. Training should involve participants in doing things and sharing experiences, solving problems and creating materials, not in listening to lectures or even holding discussions. It should also happen right in their classrooms, as I will describe. A little content may be needed to set up the context, clarify instructions, or support follow-up discussion. But if content takes more than a quarter of a program’s time, it is no longer training.

Skipping the expert

One easy and effective way to make training more like training is to get rid of the expert and use a facilitator instead. The less the facilitator has to say, the better; the more time and opportunity she creates for participants, the better. When the facilitator tells participants that she is not an expert, and that the participants are the experts, in that they are the ones teaching, the training becomes far more effective. It becomes more effective still when one of the participating practitioners serves as the facilitator. All the facilitator needs are skills for managing the process and fostering collaboration. What such training loses in the quantity and depth/breadth of an external expert’s knowledge, it gains in far deeper grounding in practice and far deeper commitment and accountability among participants. This shift to expertless training does require courage.

Continue reading

Educating Beyond the Bots [Republica Repost]

Published in Republica on February 12, 2023

The current discourse about artificial intelligence not only reflects a narrow view of education. It also represents romanticization of, or alarmism about, new technologies, while insulting students as dishonest by default. 

“It has saved me 50 hours on a coding project,” whispered one of my students to me in class recently. He was using the artificial intelligence tool named ChatGPT for a web project. His classmates were writing feedback on his reading response for the day, testing a rubric they had collectively generated for how to effectively summarize and respond to an academic text.

The class also observed ChatGPT’s version of the rubric and agreed that there is some value in “giving it a look in the learning process.” But they had decided that their own brain muscles must be developed by grappling with the process of reading and summarizing, synthesizing and analyzing, and learning to take intellectual positions, often across an emotionally felt experience. Our brain muscles couldn’t be developed, the class concluded, by simply looking at content gathered by a bot from the internet, however good that was. When the class finished writing, they shared their often brutal assessment of the volunteer writer’s response to the reading. The class learned by practicing, not asking for an answer.

Beyond the classroom, however, the discourse about artificial intelligence tools “doing writing” has not yet become as nuanced as among my college students. “The college essay is dead,” declared Stephen Marche of the Atlantic recently. This argument is based on a serious but common misunderstanding of a means of education as an end. The essay embodies a complex process and experience that teach many useful skills. It is not a simple product.

But that misunderstanding is just the tip of an iceberg. The current discourse about artificial intelligence not only reflects a shrunken view of education. It also represents a constant romanticization of, or alarmism about, new technologies influencing education. And most saddening for educators like me, it shows a disregard toward students as dishonest by default.

Broaden the view of education

If we focus on writing as a process and vehicle for learning, it is fine to kill the essay as a mere product. It is great if bot-generated texts serve certain purposes. Past generations used templates for letters and memos, not to mention forms to fill. New generations will adapt to more content they didn’t write.

What bots should not replace is the need for us to grow and use our own minds and conscience, to judge when we can or should use a bot and how and why. Teachers must teach students how to use language based on contextual, nuanced, and sensitive understanding of the world. Students must learn to think for themselves, with and without using bots.

Continue reading

TU is Well [Republica Repost]

Published in Republica on February 11, 2022.

Nepalese academia, including Tribhuvan University, has challenges, but we must tell the full story, including what it is doing well.  


I paused, somewhat sad, while skimming through responses submitted to the weekly reading assignment in a professional development workshop series last December. I was supporting the organizers, an informal network of Tribhuvan University scholars from across the country, as a resource person. One participant, who indicated that they were a senior scholar, had written that they “of course” didn’t need to “read about this issue … any more.” For the final workshop on “new opportunities for scholars’ professional development,” the task was to read some material provided and do some further research on how to prepare effective applications for scholarships/funding. The prompt said that everyone should share what they learned “whether it is for yourself or for supporting your students.…” This senior scholar’s refusal to read, it seemed, was due to a “status issue.”

“Our son has finished reading” (padhisakyo), say our proud parents, meaning that he has completed a degree. “Reading” does refer to “studying” and to finishing terminal degrees. But the reality that many scholars largely “stop reading” once they enter academic careers makes the semi-metaphorical expression look very ugly. Ceasing to read in a profession defined by lifelong learning is a real shame. Sounding the same as one did last year is not what a real scholar should do. This unfortunate condition is partly due to a misguided notion of status, but it is also caused by current policy: while scholarship is required for promotion, serious study and production can be bypassed through various shenanigans. The situation is improving, but quality standards for publication can still be sidestepped, especially by those who are politically active.

However, the reason I write this piece is to show that the above is only one part of the story about Nepal’s academe, including about Tribhuvan University. The rest of the narrative must also be advanced. Let us do that.

Flipping negative narratives

Continue reading

Unteaching Tyranny [Republica Repost]

It is possible and necessary to use technology to empower and inspire, not to tyrannize. If nothing else, the harrowing global pandemic must help us educators come to our senses about the overuse and misuse of authority.


When a fellow professor in a teacher training program said last month that he takes attendance twice during class since going online, I was surprised by the tyrannical idea. What if a student lost internet connection or electricity, ran out of data or was sharing a device, had family obligations or a health problem? We’re not just “going online,” we’re also going through a horrifying global pandemic!

At a workshop on “humanizing pedagogy” for a Bangladeshi university more recently, when asked to list teaching/learning difficulties now, many participants listed challenges due to student absence, disengagement, dishonesty, and expectation of easy grades. When asked to list instructional solutions, many proposed technocratic and rather authoritarian methods. The very system of our education, I realized, is tyrannical and most of us usually try to make it work as it is.

Tyranny, now aided by technology, goes beyond formal education. “You can only fill your bucket if you’ve brought it empty,” said a young yoga instructor in Kathmandu, on Zoom last week. She kept demanding, by name, that participants turn on their video feeds. We kept turning them off as needed. Someone kept individually “spotlighting” us on screen. Yet we were always muted, even as we were constantly asked to respond to the instructor’s questions by chat, thumbs up, hand wave, and smile. Technology magnified autocratic tendencies, undermining the solemnity of yoga.

The quality of yoga lectures and instruction didn’t match the technologically enforced discipline. “Our lungs remove ninety percent of toxins from our body,” said an instructor. Surya namaskar fixes both overweight and underweight, said another, as well as cancer and diabetes. Googling these claims led to junk websites. I quickly became an unengaged learner, waiting for lectures to be over. I read a book on yoga during lectures, or took notes on how technology can magnify tyrannical elements of instruction and academe. I reflected on how to make my own teaching more humane.

This essay is a broader commentary on the element of tyranny in education. But to show that the idea of making teaching more humane is not just a romantic ideal, I share how we can operationalize the concept, including and especially during this disrupted time.

Operationalizing humanity

Continue reading

Magic Tools and Research Integrity [Republica Repost]

Published in the Republica on March 23, 2021.

Plagiarism is a manifestation of a deeper problem in academia: publishing for the sake of publishing, and rewarding it regardless.


“Do I need to cite a source if a plagiarism detection tool doesn’t show that I’ve borrowed an author’s words?” asked a participant at a research workshop recently. “I will have to rewrite much of my article if that’s the case.”

I was not surprised. Instead, I started wondering where the question was coming from. In op-eds and other discussions, I’ve seen plagiarism treated as a problem of stealing words (rather than ideas). For instance, in a recent, highly nuanced, proposal for apology as a mode of redemption for those who have plagiarized in the past, the author casually claimed that there are now technological tools for “easily” identifying and preventing cases. Academic leaders and institutional policies alike, I remembered, exude the same incredible hope.

What’s even worse, issues about the quality and integrity of research, not to mention its social value and responsibility, are overlooked in discussions of its originality. Across South Asia and the rest of the global south, there is an increasingly misguided focus on the product of publication, rather than on the ends to which it is a means, reflecting what current policies demand and reward. Even when “impact” is discussed, it simply refers to proxy measures of the product’s quality, such as the number of citations (which may be mere name-dropping, including of one’s own work). Indeed, that is what “journal impact factor” means. When “quality” is invoked explicitly, that too simply means that the venue is “international” (that is, not locally located) or that the product is in English rather than a local language. If these critiques sound radical, it is because the status quo is absurd: it rewards publications that may have no significant value.

It is not just that someone can reap rewards by simply paraphrasing or summarizing others’ ideas. They can also make progress by fabricating or manipulating data. Either way, the magic of technology fails whenever scholars fail to ask what specific tasks specific technologies can do and how, where they can be bypassed, and what can be learned from using them.

Continue reading

Advancing research for social impact [Republica Repost]

The best frameworks for advancing socially impactful research can be created at the intersection of the grassroots efforts and institutional programs.

At the end of an intensive two-day training on semester-based teaching organized by Tribhuvan University in Nagarkot last May, we asked the seventy or so university faculty members from around the country to pick between two groups. One group was asked to strategize how to institutionalize the teaching excellence training, building on a few years of work done mainly online by a grassroots initiative. The other group would discuss a new topic: how to pursue and promote research and publication in the university. To our pleasant surprise, over two-thirds of the participants went to the research group, and we had to come up with a quick way to regroup the scholars more evenly.

In recent years, we have observed an increasing number of trainings and discussions focused on research, writing, and publication in and beyond our universities. It is not just that the current lockdown has afforded university professionals more time. The trend builds on strong momentum from the last few years, in both public and private institutions, in the capital and across the country. The incident above was one manifestation of that broader momentum.

Even better, we have observed an increased interest in making research and publication more socially impactful, especially among younger scholars. A recent article about UGC-supported grants showed that while the quality of our research and publications remains concerning, younger researchers are publishing stronger scholarship.

In this essay, we highlight two complementary dimensions for advancing research and publication for greater social impact: a community and culture of research, and a policy framework that can foster that culture. The proposed framework would provide incentives to scholars, as well as realign institutional priorities and accountability mechanisms, to make academic research more driven by social needs.

Continue reading