Saturday, June 28, 2025

AI’s threat to journalism, scholarship, and self-expression

By Farooq A. Kperogi

On June 16, the final day of my three-week visit to Nigeria, I met with my dear friend Malam Ali M. Ali, Managing Director of the News Agency of Nigeria (NAN). Malam Ali, who was my editor at the Kano State Government’s Sunday Triumph in the late 1990s and with whom I had related since my first year at Bayero University in the early 1990s, when he was features editor of the Daily Triumph, sprang a surprise: he asked me to deliver an impromptu talk to NAN editors on the pleasures and perils of artificial intelligence in journalism.

Malam Ali, a writer and orator of extraordinary talent who had taken a keen interest in me since my late teenage years in Kano, regularly publishing my opinion articles in the Daily Triumph, had such solid faith in my abilities that he believed I could handle the topic adeptly even without prior preparation.

I think it was because he knew that, lately, my scholarly research has increasingly centered on the intersection of artificial intelligence and journalism. Earlier this year, for instance, I collaborated with Mr. Azubuike “Azu” Ishiekwene, Editor-in-Chief of Leadership Newspaper, to co-author a scholarly article titled “Light in a Digital Black Hole: Exploration of Emergent Artificial Intelligence Journalism in Nigeria,” published in the Journal of Applied Journalism and Media Studies.

The article investigates the initial practices and growing adoption of AI technologies within the Nigerian journalistic landscape.

I also have formal certification in detecting AI writing and have systematically studied the unique stylistic imprints of AI writing on my own. I can tell with 99% confidence when a piece of prose is AI-generated.

I shared a number of insights with NAN editors in a lively and substantive discussion on the legitimate, innovative applications of AI in journalism, as well as its ethically fraught and self-defeating misuses.

On the productive side, AI can assist with distilling complex reports, interpreting intricate statistical data, suggesting headlines, enhancing grammar and style without erasing a writer’s distinctive voice, and even sparking story ideas.

Yet, in recent months, I’ve noticed a troubling trend: more and more journalists are outsourcing not just their writing but their thinking to AI. The outcome is often dull, colorless, formulaic prose that reflects the homogenizing influence of American English and its creeping linguistic imperialism.

But an even more troubling, frighteningly insidious AI-driven trend is beginning to take root in Nigerian journalism: entire news reports are being fabricated without any human reporting, with AI chatbots inventing sources out of thin air.

Take, for instance, a June 23 news report by the online publication Business Hallmark titled “2027: President Tinubu may dump Shettima to achieve ambition.” The story contains the line: “Political analyst Farooq Kperogi described it as a ‘high-stakes chess game.’” The problem? I never, ever said that either to the publication or in any of my commentaries. It’s a textbook case of what’s called AI hallucination.

The piece reads like it was entirely machine-generated. From its staid, clunky prose and formulaic structure to the questionable attributions, there was no way a Nigerian journalist could have written it.

One can easily imagine the writer feeding a prompt into ChatGPT asking it to “find Nigerian commentators on 2027 political scenarios,” then copying and pasting the results without verification.

This problem isn’t confined to journalism alone. In June 2023, I received a Google Scholar alert notifying me that two Pakistani scholars had cited my work. Their article, titled “Insights of Mystical, Spiritual and Theological Studies: The Interplay of Media and Political Islam in Pakistan—A Critical Evaluation of Farooq A. Kperogi’s Book,” immediately raised red flags.

The very first sentence of the abstract reads: “This paper critically evaluates Farooq A. Kperogi’s book, Media and the Politics of Islamization in Pakistan, within the context of the interplay of media and political Islam in Pakistan.” But I never wrote any book by that title. So, they reviewed a book I didn’t write, on a subject I’ve never researched, published on, or even commented on.

It was a phantom book, conjured out of thin air, almost certainly the product of an AI tool like ChatGPT. It is another case of human-engineered AI hallucination.

I’ve never set foot in Pakistan, and I’ve never written a single word, scholarly or otherwise, about it. Yet this fictional book fraudulently attributed to me supposedly exists on Amazon, complete with a non-functional link in the article.

To compound the absurdity, the review misidentifies me as an associate professor, an outdated title that ChatGPT and other AI systems used to assign to me because their training data had cut off before my promotion to full professor.

I brought the academic misconduct of the authors, Ali Hussain and Abid Hussain, to their attention as well as to that of the journal’s editor. They responded with an apology and assured me that the article would be withdrawn. Yet, more than a year later, it remains publicly available on the journal’s website.

Even more troubling, this year alone, I’ve found at least three published articles by Nigerian academics—also in the kinds of fraudulent, pay-to-play journals I’ve written about several times in the past—that cite nonexistent books and articles supposedly written by me. The authors know who they are. I’ll spare their names for now.

If academics are publishing wholly AI-generated articles in shady journals to climb the career ladder, what moral right do they have to discourage students from outsourcing their writing and thinking to AI chatbots?

I teach my students—and I shared this with NAN editors—not to use AI to write, because AI erases their stylistic singularities.

When Turnitin flags my students’ writing as AI generated and they contest it, I take the trouble to parse the style, cadence, turns of phrase, stereotyped vocabulary, and punctuational quiddities of the AI writing flagged in their submissions.

I then show how natural human writing (especially by an undergraduate) is unlikely to follow those patterns with such exactitude. They almost always give up, fess up, and ask for a second chance, which I give to first-time offenders.

Plus, I have a list of recurrent AI phrases, frozen phraseology typical of all AI writing, punctuational idiosyncrasies that human writing is unlikely to produce at regular, predictable intervals, etc. Since, in any case, I read every word they write in order to give them feedback on grammar, style, tone, and substance, looking out for AI-generated content is actually easy peasy lemon squeezy, as American kids say.

After a bad experience with a previous graduate class I taught, I introduced an “AI detection in writing” module, where we read articles about tell-tale signs of AI writing, research the decline in the cognitive abilities of people who are AI-dependent in their writing and thought-processes, etc.

It worked wonders! Students developed an impressively stone-cold disdain for AI writing, heightened their sensitivity to AI-generated writing (so that months after the semester, some of them still send me the AI-generated content they encounter on social media and elsewhere, complete with laughter emojis) and, of course, no one used AI to write.

I encourage them to use AI to generate ideas, develop outlines of ideas, and to create rough drafts that serve as inspiration to create their own work, but not as a substitute for writing and thinking.

It’s a well-worn cliché in literary studies that “style is the man.” The cliché owes a debt to the French naturalist Georges-Louis Leclerc, Comte de Buffon, who famously said in a 1753 lecture, “Le style c’est l’homme même,” that is, “The style is the man himself.”

In other words, our style is an expression of our individuality, a reflection of our characteristics, an attribute of our personality.

Oscar Wilde vernacularized it best when he wrote: “I don’t wish to sign my name, though I am afraid everybody will know who the writer is: one’s style is one’s signature always.”

So, when people write, they are first of all giving expression to their thoughts before they are communicating with others. And their style of writing is as unique to them as their gait, their sartorial choices, their manner of smiling, their handwriting, etc.

AI is sadly erasing all that. Maybe we’re entering an age where “the prompt fed into AI chatbots is the man.”

Everyone knows that social media platforms are now suffused with AI-generated slop. Authentic self-expression is now dead. I typically stop reading social media updates after seeing the first tell-tale stylistic imprints of AI-generated prose.

People who used to write English that reflected their background and learning suddenly now write error-free English with native-speaker flair—and, of course, idiosyncratic AI phraseology. They are, in other words, stylistically dead.

We’re getting to the point where the only markers of authenticity in written work will be the appearance of errors (of expression and omission) in texts. Of course, even that can be gamed: Get ChatGPT to write your prose and then sprinkle it with intentional errors in strategic places to lend it “authenticity.” Ha! This is a wild new world we’re in!

 
