Finding the fakes
How AI is making it harder to tell the truth.
“Computers: Are they conquering our world?”
That was the title of an article published in Christian Courier (CC) more than 40 years ago. “The possibilities for manipulation of people,” columnist and labour expert Harry Antonides wrote in 1982, “by controlling and slanting information and news via the new systems of communication are staggering.”
He was right.
These days artificial intelligence (AI) software can create unique images, text and video from no more than a few sentences of prompting. A chatbot called ChatGPT might be the most popular AI content creator at the moment, but many more are available online. Some of the blog posts you’re reading these days were generated by Jasper or SEO.ai. Some writers are using Quillbot to paraphrase quotes, or StoryLab to outline articles. Tweets that sound authentic might have been created by a program like TweetMe, which reads a user’s past one-liners and creates more of the same. The list of easily accessible, non-human content creation tools goes on and on.
Fake science
The Eurasia Group, a U.S.-based political risk consultancy, lists artificial intelligence as one of the three biggest risks our world faces in 2023. Their report predicts that within the next year, programs like ChatGPT will be able to reliably pass the Turing test, meaning that humans won’t be able to tell the difference between AI-generated text and human writing.
These predictions are supported by a Northwestern University study released last month. In the experiment, scientists were asked to review abstracts of academic papers and determine which were written by AI and which were genuine. The participants could only spot the fakes 68 percent of the time, partly because ChatGPT was very good at inventing plausible sample sizes for each type of study. A fake study on hypertension, for instance, had a much bigger participant pool than a report on monkeypox, which is a rarer illness.
Even humans don’t always sound like themselves: almost half of the abstracts the reviewers misidentified were actually written by people but attributed to the machine.
“This is concerning because ChatGPT could be used by paper mills to fabricate convincing scientific abstracts,” says one of the study designers, Dr. Catherine Gao. Paper mills produce fraudulent academic research for profit and AI could increase their output exponentially. Our world’s infodemic is about to get a lot worse.
Defining intelligence
To address these concerns, the study authors recommend that publication editors start using AI output detectors to weed out fabricated submissions. Even when humans can’t find the fake, the researchers noticed that free online programs like gptzero.me and Crossplag are remarkably reliable at catching AI-generated text.
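For readers curious what these detectors actually do, here is a minimal sketch of the general technique. It is not gptzero.me or Crossplag (both are web services whose internals aren’t public); it runs an openly published detector model against a passage of text, and the model name and sample abstract are illustrative choices, not anything from the Northwestern study.

```python
# A minimal sketch of an AI-output detector: a classifier scores how
# likely a passage is to be machine-generated. The model below is the
# openly published "roberta-base-openai-detector" on Hugging Face; it is
# NOT the tool used in the study, nor gptzero.me or Crossplag.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

# An illustrative (made-up) abstract to score.
abstract = (
    "We conducted a randomized controlled trial of 1,024 patients "
    "with hypertension to evaluate the efficacy of a novel therapy."
)

result = detector(abstract)[0]
print(result["label"], round(result["score"], 2))
# The model's labels are "Real" and "Fake"; the score is its confidence.
```

In practice no classifier is perfect, which is why the recommendation is to put these tools in editors’ hands rather than to replace the editors.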
“There are already several systems designed to detect whether a passage was written by ChatGPT3,” explains Allen Plug, a Canadian philosopher working in the AI industry. “The fact that these have appeared so quickly suggests that ChatGPT-produced-prose contains identifying features.” For this reason, Plug says he doesn’t think that ChatGPT passes the Turing test. But he also says that we need to come up with a better way to determine whether or not an artificial system is actually intelligent.
“Mimicking how a human would respond is different than demonstrating that the system actually understands,” Plug told CC. “A good test of intelligence isn’t that a system produced the same answer that a human would; rather, a good test would examine why and how the system arrived at that answer.”
Digital hallucinations
Even though AI is better at detecting fraudulent articles than humans are, we’re still the only ones who know what to do next. Programs like ChatGPT spit out flatly incorrect information all the time, especially when a prompt asks for specifics beyond the AI’s training data.
“It’s not for everyone,” explains Michael Krakowiak, a web developer who works for Citizens for Public Justice. “As long as you know what’s missing or incorrect, ChatGPT can usually improve upon its answer.”
For example, when asked to write a poem about Lionel Messi, ChatGPT strung together several beautiful verses about soccer, then slipped up by referring to the Argentinian’s “golden wrist.” But with a quick human-made edit, swapping “wrist” for “foot,” the poem was perfect.
Hallucinations like this are common in AI-generated content, and Google knows it. Websites that churn out spammy AI-generated content don’t show up at the top of Google searches, and their traffic usually plummets.
Glitches and glaring errors
But when well-established websites like CNET start relying on AI-generated articles, it becomes harder for Google to isolate and penalize the unreliable content. In November 2022, this popular tech and business platform began using AI to write dozens of articles. The fine print at the bottom of the robot-written text said: “This article was generated using automation technology and thoroughly edited and fact-checked by an editor on our editorial staff.” That didn’t stop CNET from publishing a misleading article about how compound interest works; it has since been corrected, after another publication called CNET out. Editor-in-Chief Connie Guglielmo published an explanation calling the platform’s new use of AI an “experiment” with “pragmatic” motivations. Certainly more publications are going to start looking to AI as a solution to tight budgets and overworked writers.
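The arithmetic the bot garbled is simple, which made the slip-up conspicuous: as other outlets reported it, the article presented a year-end balance as if it were the interest earned. A quick sketch with illustrative round numbers (not necessarily CNET’s exact figures):

```python
# Compound interest: balance = principal * (1 + rate) ** years
# Illustrative numbers only; not necessarily the figures CNET used.
principal = 10_000   # initial deposit
rate = 0.03          # 3% annual interest, compounded yearly
years = 1

balance = principal * (1 + rate) ** years
earned = balance - principal

print(f"Year-end balance: ${balance:,.2f}")  # $10,300.00
print(f"Interest earned:  ${earned:,.2f}")   # $300.00 -- you earn $300,
                                             # not $10,300
```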
As for Christian Courier, you can rest assured that this paper is still entirely human-written. We like writing too much to outsource it. But while we watch what’s happening in our industry, we ponder the same things that Harry Antonides did more than 40 years ago: “The question is not whether the computer can be used for good. It obviously can. But what becomes man’s role; what is the purpose of his life?” Gendered language aside, this is what we’re still asking. Thank God for philosophers, web developers, engineers and educators who are asking these questions right along with us.
The future of work and AI
Web developer Michael Krakowiak shares a few more initial thoughts about ChatGPT:
ChatGPT and similar AI software can potentially change how we work and help automate some tasks. I have noticed that these tools are helpful with “blank page paralysis”: they can help us push past feeling overwhelmed, or even procrastinating, when beginning to work on a project. Of course, you need to know the topic you’re asking the AI about in order to verify the information it produces.
Whether the requested result is text content or code, it is helpful to be aware of the system’s limitations. ChatGPT, for example, relies heavily on probability and attempts to give the most likely correct answer. It was trained on information available before 2022, so any new coding practices or changes to frameworks from 2022 or this year will not be reflected in its answers. If you ask ChatGPT to generate a snippet of code for WordPress, for example, the result may have to be checked for best coding practices, security issues and accessibility. It could provide plainly wrong information, too.
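To make that concrete, here is a rough sketch of the kind of request I mean, written against OpenAI’s 2023-era completion API. The prompt and settings are illustrative, and whatever comes back should be treated as a draft to review, not finished code.

```python
# A hedged sketch: asking OpenAI's (2023-era) completion endpoint for a
# WordPress snippet. The prompt is illustrative; the reply is a draft.
import openai

openai.api_key = "sk-..."  # substitute your own API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write a WordPress filter that appends a short copyright notice "
        "to the end of every post."
    ),
    max_tokens=300,
)

draft = response.choices[0].text
print(draft)
# Before using the draft, check it against current best practices,
# security (escaping, sanitization) and accessibility; the model's
# training data predates recent framework changes.
```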
I look forward to seeing what the future has in store for ChatGPT and other AI systems. I believe they will become increasingly popular tools to improve workflows for professionals. I remember when my mom worked in an accounting department in the 80s. I recall numerous folders of bookkeeping data on bookshelves in her office and a massive calculator on her desk. Now she works with specialized accounting software on her computer (or maybe even “in the cloud”). The bookshelves and the giant calculator are long gone, and she does more in less time. I am sure that AI systems will help us make a similar leap in how we work.