Using AI: Credit, Content, Credibility [Republica Repost]

AI is blurring boundaries of all kinds, so we all need a communicative philosophy to help us set boundaries for content, credit, and credibility when using AI tools. Learning with and from AI is flawed because the tools' content can be inaccurate or incomplete, reflecting the limits of their datasets. It is also problematic because AI's algorithms (or thought patterns) are confined to certain discourse traditions and practices. And AI content can be unreliable because the tools are mostly designed to "generate" plausible patterns rather than locate and synthesize information (so hallucination is a feature, not a bug).


Published in The Republica on May 16, 2024

[Image: a professor critically reviewing a paper at a desk flanked by books on one side and an AI interface on the other, while a student waits with a confused expression; a digital scale behind them balances the AI interface against the professor's books, evoking the tension between AI assistance and human judgment.]

Faced with machines that seem to (and often actually do) match our linguistic abilities, students, professionals, and the general public alike are struggling to maintain boundaries for effective learning, professional relationships, and honest communication. The usefulness of any AI tool lies at the intersection between its ability to generate content and our ability to judge that content knowledgeably and skillfully. If a tool generates more than we can judge, we cross out of the zone of safety and into the zone of danger, which risks undermining the value of the content, making the credit we seek undeserved, and threatening our credibility and human relationships.

AI and credit

Let us begin with learning. Imagine that you were a college professor before the internet, and you learned that one of your students submitted to you a research paper that he had asked his cousin to write. Imagine that he actively guided his cousin to meet your expectations for the top grade. Most likely you would not have given that student the top grade. Now imagine that you are a college professor today, and you just learned that a student has submitted a paper he prompted an AI tool (such as ChatGPT) to write. Imagine that he skillfully used ChatGPT to produce the paper, and the final product meets your expectations for the top grade, while he learned little from the process. Would you give the student the top grade? Continue reading

Magic Tools and Research Integrity [Republica Repost]

Published in the Republica on March 23, 2021.
Plagiarism is a manifestation of a deeper problem in academia: publishing for the sake of publishing, and rewarding it regardless.


“Do I need to cite a source if a plagiarism detection tool doesn’t show that I’ve borrowed an author’s words?” asked a participant at a research workshop recently. “I will have to rewrite much of my article if that’s the case.”

I was not surprised. Instead, I started wondering where the question was coming from. In op-eds and other discussions, I've seen plagiarism treated as a problem of stealing words (rather than ideas). For instance, in a recent, highly nuanced proposal for apology as a mode of redemption for those who have plagiarized in the past, the author casually claimed that there are now technological tools for "easily" identifying and preventing cases of plagiarism. Academic leaders and institutional policies alike, I remembered, exude the same incredible hope.

What's even worse, issues about the quality and integrity of research, not to mention its social value and responsibility, are overlooked in discussions of its originality. Across South Asia and the rest of the global south, there is an increasingly misguided focus on the product of publication—rather than on the ends to which it is a means—reflecting what current policies demand and reward. Even when "impact" is talked about, it simply refers to proxy measures of the quality of the product, such as the number of citations (which may be mere name-dropping, including of one's own work). Indeed, that is what "journal impact factor" means. When "quality" is invoked explicitly, that too simply means that the venue is "international" (or not locally located) or that the product is in English (rather than a local language). If these critiques sound radical, it is because the status quo is absurd: it rewards publications that may have no significant value.

It is not just that someone can reap rewards by simply paraphrasing or summarizing others' ideas. They can also make progress by fabricating or manipulating data. Either way, the magic of technology fails whenever scholars fail to ask what specific tasks specific technologies can perform and how, where they can be bypassed, and what there is to learn from using them.

Continue reading

Generalizing Generations–Here We Go Again

A quick, fun post.

Since I read about a dozen books on this subject when writing a seminar paper for a popular culture course in graduate school (around 2009), including Don Tapscott's Growing Up Digital and others that categorized and generalized younger generations, I had been itching, fretting, and impatiently waiting to learn what would come after generation "Y." I've had sleepless nights thinking about the different possibilities.

Finally, there it is: Generation Z (the young people born from around 1995 onward). We've started seeing a plethora of articles (books are coming) about this group of humans. Most of the writers first generalize up to their necks and then, more or less quickly, caution readers against generalization; most paint the new generation as distinct; some go uber-optimistic; and others focus essentially on how to monetize our understanding of the new human species.

Exactly what I was waiting for. Continue reading

Technomagicology

Technology doesn’t make people mindless. What makes them lose their senses is their obsession with whatever is “new” or “advanced,” their simplistic claims and thinking about it, their disregard of (the complexity of) related issues in life and society.

Technological magic thinking is no better than other types of magic thinking, like fancy new religions, denial of science, or absurdly exaggerated health benefits of exotic fruits. This type of thinking makes people forget, for instance, to do any research on the subject or to test the tool being touted; it also makes them forget that humans have for a very long time used highly "advanced" technologies like pencil and paper.

Technomagicology makes people not use basic critical thinking; more insidiously, it makes them consider individuals and societies not using their kind of technology to be “behind” or even “backward”; it makes them forget their trade and focus on the tools. Think about a farmer who loves to get on his tractor trailer and go on the highway, or an artist who produces more self-serving discourse about her art than art itself.

To give you a concrete example that I recently came across, it makes them make arguments (about a “Universal Translator”) as in the story below.

Continue reading

Putting Everything On the Line?

Reposting (for access) Part I of a series of blog posts by Chris Petty and me from RhetComp@StonyBrook–


Putting Everything On the Line? Optimizing the Affordances, Minding the Pitfalls

Shyam Sharma and Christopher Petty

Especially since the advent of web 2.0 applications, the landscape of teaching writing has been changing drastically. In many ways, writing teachers benefit greatly by moving into web-based, increasingly shared, and peer-involved practices, especially at the post-secondary level. New technological applications are enabling highly effective pedagogical practices to develop. However, technocratic arguments founded on the positive affordances of new technologies can also be taken too far.

In this context, we wanted to write a brief series of blog posts that will describe and discuss some of the educational/pedagogical benefits and also pitfalls of using web applications and shared spaces for providing instructor feedback to students’ writing, for engaging them in peer review, and for promoting collaboration in college writing courses. These discussions will go along with somewhat corresponding videos (which will be included in a separate section in the Writing@StonyBrook portfolio) that demonstrate how to effectively use collaborative and interactive spaces and tools such as wikis, cloud-based documents, blogs, and portfolios. Continue reading

Reverse Hacking Education

Disruption, reinvention, and even hacking have become very common themes in the discourse of higher education lately. As I sit down to write this post on the theme of hacking (for the fourth week of #clmooc, a connected learning community/course), I am thinking about how buzzwords tend to carry truckloads of irony and how often those who jump on the bandwagons of buzzwords with the most excitement don’t realize (or care about) the complexity of their words’ meanings, origins, uses, and effects. As a teacher, I am fascinated by the educational potential of irony and paradox in buzzwords, grand narratives, and over-sized belief systems. So, I often ask students to unpack the irony/paradox of overly popular words and their collocations. The intersection of the word “hack” and “education” seems so rich that I want to write an essay on it myself.

Assignment X: "Pick a word or phrase that has become unusually popular in the field of education (or another broad subject/context of your interest), study its meanings, origins, uses, etc., and write an essay of between 1,000 and 2,000 words." Continue reading