This is The Rotten Apple, an inside view of food integrity for professionals, policy-makers and purveyors. Subscribe for weekly insights, latest news and emerging trends in food safety, food authenticity and sustainable supply chains.
No robots allowed: why AI-powered writing doesn’t belong in food safety publications
News and Resources Roundup (food safety news that doesn’t suck)
Two long reads for your holiday pleasure
Food fraud news, incidents and updates
Hello and Happy Easter to you (if that is your thing),
Welcome to Issue 83 of The Rotten Apple. Today is a holiday in many countries, including mine, so I am departing from our usual format this week.
Instead of our usual four articles, this issue has one long, original article (by me) and links to two other articles you might enjoy. The long article is about the danger that AI-powered language models pose to food safety knowledge. I ran an experiment with the ChatGPT AI software, and the results were disturbing. Read on to learn what I found.
This week’s food safety news includes an alert for pet food that caused tuberculosis in pet owners😲, news about changes to Yum! audit standards, and a salmonellosis outbreak from raw dough and batter, plus more.
As always, this issue ends with food fraud incidents and food fraud updates behind the paywall. Because being a paid subscriber means getting access to the good stuff. And supporting this robot-free publication.
Thank you!
Karen
P.S. See you at our first live training session on Thu/Fri! (Details)
P.P.S. Need more info about paid subscriptions? Learn more here. Or….
No Robots Allowed: Why AI-powered Writing Doesn’t Belong in Food Safety Publications
You have probably heard by now about a powerful type of software called ChatGPT, an artificial intelligence (AI)-powered ‘language model’. It is designed to generate human-like responses to natural language input.
ChatGPT can be used to write essays, articles, advertisements, emails, voiceover scripts, whitepapers, questionnaires, survey responses, policy documents, computer code, game designs, birthday cards, wedding speeches… anything! If it is text-based, ChatGPT can generate it for you.
Writers and knowledge workers all over the world are worried that ChatGPT and its successors will make their jobs redundant.
And readers – people like you, my dear readers – are increasingly exposed to content created by AI language models like ChatGPT.
If that seems far-fetched, consider that Reuters reports ChatGPT is the fastest-growing consumer app in history, having reached 100 million users within two months of its launch in November 2022.
🤖 “My purpose is to generate human-like responses to natural language input, which means that I can understand and respond to a wide range of questions and statements on various topics. I can be used for tasks such as language translation, content creation, and customer support.” ChatGPT 2nd April 2023
How will ChatGPT affect food safety and food fraud knowledge?
There is plenty of AI-generated content on the internet and in your email inbox already. It exists in the food safety knowledge industry too, including in newsletters and LinkedIn posts (you can check text-based information for AI content using an AI detector service, like this one).
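For the technically curious, here is a minimal sketch of what such a check might look like in code. The endpoint URL and the response field below are hypothetical placeholders – every detector service has its own API, so consult your provider's documentation.

```python
# A minimal sketch of submitting text to an AI-content detector.
# NOTE: the endpoint URL and the "ai_probability" response field are
# hypothetical placeholders, not a real service's API.
import requests

def check_for_ai_content(text: str) -> float:
    """Return the detector's estimated probability that `text` is AI-generated."""
    response = requests.post(
        "https://example-ai-detector.com/api/v1/detect",  # hypothetical endpoint
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["ai_probability"]  # hypothetical field name

suspect_paragraph = "Domoic acid is a naturally occurring toxin produced by..."
print(f"Estimated AI probability: {check_for_ai_content(suspect_paragraph):.0%}")
```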
You won’t find ChatGPT-authored articles on The Rotten Apple website, however. I don’t let robots write for me. In the wider food safety and food fraud media landscape, purely-ChatGPT-authored content is still rare (for now), and I sincerely hope it remains that way.
While AI-powered language models can – in theory – save writers and researchers time and effort, there is one big problem (MASSIVE problem!) with their use.
They write ‘factual information’ that is wrong.
When ChatGPT was first released I tested it out, thinking it could save me time when writing articles. Turns out it doesn’t save me time, it gives me a headache. And it could destroy everything I have worked so hard to build.
What I discovered is that ChatGPT is a liar. A remorseless, compulsive liar that is both unapologetic and disingenuous.
So while ChatGPT might be good for writing marketing copy, birthday card greetings and lifestyle blog posts about the power of journalling, it does not have a place in genuine food integrity knowledge outlets.
Facts, schmacts
To demonstrate why I don’t publish AI-authored articles, I conducted an experiment in creating knowledge-based food safety information with ChatGPT.
In Issue 80 of The Rotten Apple, I published an article about domoic acid and amnesic shellfish poisoning. For my (human-made) article, I began the writing process with information from an FAO report and fleshed out the facts with peer-reviewed information I found online using Google and Google Scholar.
The AI experiment was my attempt to get ChatGPT to write a similar article for me. I wanted to know if it could write an article about domoic acid that would meet my high publishing standards.
What followed was a comedy of errors that was so bad I had to laugh (ruefully).
The AI experiment
I started the experiment with a prompt to ChatGPT: “Write a 500 word story about domoic acid and amnesic shellfish poisoning.”
🤖 ChatGPT: Once upon a time, in a small coastal town, there lived a young woman named Lily who had a passion for shellfish…
Okayy… I asked for a story and a story was what I got. When I told it to “Stop Generating” and re-prompted ChatGPT with a request for a non-fiction article, it gave me one about the pros and cons of virtual reality. Not quite what I had in mind!
On the third attempt, it generated a 500-word non-fiction article about domoic acid and amnesic shellfish poisoning (ASP), as requested. Or rather, it generated something that looked like a non-fiction article. What I actually got was not quite that.
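(As an aside, for anyone who wants to reproduce this kind of experiment programmatically rather than through the web interface, here is a minimal sketch using OpenAI's Python client. The model name is illustrative, and the API details may well have changed by the time you read this.)

```python
# A minimal sketch of sending the same prompt via the OpenAI API.
# Assumes the `openai` Python package is installed and the
# OPENAI_API_KEY environment variable is set; the model name is
# illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at the time of writing
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 500 word story about domoic acid "
                "and amnesic shellfish poisoning."
            ),
        }
    ],
)
print(completion.choices[0].message.content)
```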
ChatGPT’s article was authoritative-sounding and contained somewhat accurate information. I know it is ‘somewhat accurate’ because of the research I did with peer-reviewed and trusted publications when writing my own article on the same topic for Issue 80.
Just one problem: ChatGPT’s article contained factual errors.
A few of the ‘facts’ in the AI-generated article were a little off the mark. For example, ChatGPT’s article said “The impact of domoic acid on human health and the economy is significant”. Honestly, as foodborne illnesses go, domoic acid poisoning is not exactly a top performer in terms of impact. But, I guess domoic acid contamination can have a financial impact on shellfish production, so I suppose you could argue that statement is – sort of – correct.
Some of the other ‘facts’ in ChatGPT’s article, however, were more troubling, because I wasn’t sure whether ChatGPT was wrong or whether I had missed vital information in my own article on the same topic.
So I asked ChatGPT for its sources. That’s where things got weird (first), and (ultimately) disturbing.
Citing sources and collecting evidence with ChatGPT
First I asked ChatGPT to provide me with sources to support the information in its article. The first ‘fact’ I wanted to check was its claim that there had been multiple large human outbreaks of amnesic shellfish poisoning (ASP) since 1987. If those outbreaks had occurred, I had not included them in my article, which would have been a major oversight.
“Certainly!” said ChatGPT, and it promptly gave me a properly cited source: a paper published in the respected journal Annual Review of Marine Science. It even summarised the relevant parts of the paper for me, saying that in it, the authors discuss the history of ASP outbreaks as well as factors affecting their severity.
Just one problem. When I tried to read the article myself, I discovered the paper does not exist. That’s odd.
I told ChatGPT I couldn’t find the article and asked it to help.
🤖 ChatGPT: I apologize for the mistake in my previous response. Here is a corrected source for the statement that ASP outbreaks have occurred along the coast of North America, from Alaska to California:
Trainer, V. L., & Bates, S. S. (2018). An introduction to harmful algal blooms: Mechanisms of bloom formation and impacts on fishery resources. Manual of Fisheries Science and Technology, 28, 73-116. doi: 10.1007/978-3-319-71069-7_4
In its response, ChatGPT politely said it had made a mistake and provided what it called the correct source instead. Strangely, though, the title of the second paper suggested it was mostly about fishery management. Did this paper really describe details of human outbreaks, I asked?
Actually, no, said ChatGPT.
🤖 The article I provided, Trainer and Bates (2018), primarily focuses on the ecological and fisheries impacts of harmful algal blooms, including those caused by species that produce domoic acid. While it does not specifically describe human outbreaks of ASP, it does discuss the potential health risks associated with consuming shellfish contaminated with domoic acid. The article notes that ASP can cause a range of symptoms in humans, including gastrointestinal issues, neurological effects, and in severe cases, coma or death. The authors also discuss the regulatory measures that are in place to monitor and manage the risks of domoic acid contamination in shellfish, such as closure of harvesting areas and testing of shellfish for domoic acid levels.
So the paper wasn’t a legitimate source for the human-outbreak ‘facts’ in ChatGPT’s article. Even so, judging from ChatGPT’s helpful summary, it seemed to contain useful information, and I had not discovered this paper when researching my own article. So I went to check it out.
Unfortunately, there is no peer-reviewed article by that name in that journal. ChatGPT had again provided me with a detailed description of the contents of an article that never existed.
I tried a third time, telling ChatGPT that I couldn’t find the paper and that its DOI (digital object identifier) does not exist.
ChatGPT apologised again, and offered me “the correct reference for the article I mentioned”, citing yet another fake reference and adding
🤖 I hope this helps.
It was not helpful. It was the third ‘fake’ reference ChatGPT had provided as a source for its article.
This time, I demanded that ChatGPT give me a reference with a DOI that actually exists. And I did get one (hooray!). Unfortunately, the genuine DOI was attached to yet another fake paper, which ChatGPT said was about algal blooms and authored by M. W. Silver. The DOI actually belonged to a genuine paper about energy cycles by Z. Kolber. Yet another false ‘source’.
Now I was angry. Seriously, ChatGPT, why not just admit that you are making stuff up!
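(If you want to run the same DOI check yourself, here is a minimal sketch using the free Crossref REST API, which returns the registered title and authors for a DOI – or a 404 if no such DOI exists. The DOI in the example is a placeholder, not one of the DOIs ChatGPT produced.)

```python
# A minimal sketch of verifying a DOI against the public Crossref REST API.
# A registered DOI returns its real metadata; an invented DOI returns 404.
import requests

def look_up_doi(doi: str) -> None:
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if response.status_code == 404:
        print(f"{doi}: no such DOI is registered")
        return
    response.raise_for_status()
    work = response.json()["message"]
    title = work.get("title", ["(no title)"])[0]
    authors = ", ".join(
        f"{author.get('given', '')} {author.get('family', '')}".strip()
        for author in work.get("author", [])
    )
    print(f"{doi}\n  Title:   {title}\n  Authors: {authors}")

look_up_doi("10.1000/placeholder-doi")  # placeholder, not a real citation
```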
Testing the limits
Having established that ChatGPT was unable to provide evidence for the ‘facts’ in its article about domoic acid and amnesic shellfish poisoning (ASP), I decided to find out whether ChatGPT would or could recognise that it was ‘lying’.
I started to play along.
I asked it to summarise the symptoms of ASP described in the fake scientific paper – the paper it had invented. Its answers were generally pretty close to what would be found in ‘real’ papers. Except it said someone had forgotten their own name during the 1987 ASP outbreak in Canada.
This was news to me. ASP is known to affect short-term memory but I had never seen anything about victims forgetting their own names in the outbreak reports. I asked ChatGPT whether it could provide its source and received a very specific answer, telling me that the case was reported in “the following peer-reviewed article…” and even giving me the page number.
I had to laugh. The “peer-reviewed article” is a paper about the role of toilet facilities in spreading food-borne illnesses, not about amnesic shellfish poisoning.
🤖 Yes, the case of a person forgetting their own name after consuming shellfish contaminated with domoic acid during the 1987 outbreak in eastern Canada was reported in the following peer-reviewed article:
Todd, E. C. D., Greig, J. D., Bartleson, C. A., & Michaels, B. S. (2008). Outbreaks where food workers have been implicated in the spread of foodborne disease. Part 9. Washing and toilet facilities provided to food workers in relation to spread of foodborne illness: a review. Journal of Food Protection, 71(2), 400-421. https://doi.org/10.4315/0362-028x-71.2.400
The specific case is described on page 414 of the article.
Also, the paper doesn’t exist.
So what do you have to say for yourself, ChatGPT?
🤖 I apologize for the confusion. I made an error in my previous response. The case of the person forgetting their own name after consuming shellfish contaminated with domoic acid actually comes from a different article…
Another apology. And another supposedly peer-reviewed paper, offered as evidence for the ‘fact’.
Bingo! This time the article was legitimate (hooray!). ChatGPT summarised the paper for me and told me that the “specific case of the person forgetting their own name is described on page 1777 of the article”.
Now we were getting somewhere!
Except, when I skimmed the article I didn’t find any mention of a victim forgetting their own name. I asked ChatGPT for a direct quote so I could search for it in the article. But things came unstuck again, as ChatGPT admitted that this source too was not correct.
🤖 I apologize for the confusion. After reviewing the article again, I realized that I made an error in my previous response. The article actually does not mention a case of a person forgetting their own name after consuming shellfish contaminated with domoic acid.
More apologies followed. Great. Interestingly, ChatGPT did not apologise for making up its sources. Instead, it apologised for “misremembering” its sources. Hmmm.
🤖 I must have misremembered the source of that particular anecdote, and for that, I apologize. Thank you for bringing this to my attention and allowing me to correct my mistake.
Well, it’s polite, and that’s something, I guess.
Where does that leave us?
I am deeply troubled by the idea that human publishers might inadvertently disseminate untruths concocted by AI language models like ChatGPT. In my experiment, ChatGPT generated multiple untruths about a serious foodborne toxin and packaged them into an authoritative-sounding article, complete with references, ready to be published in a magazine or website.
When questioned about the veracity of its sources, ChatGPT defended its untruths relentlessly, apologising for providing an incorrect source, then repeating the false ‘fact’ and providing a new fake source in a seemingly limitless cycle. It was only by playing along and pretending to accept a fake source that I got closer to an admission of error.
In my experiment, ChatGPT never admitted that it had generated falsehoods for its article, nor did it ever admit that there is no scientific evidence to support some of its claims. The closest I got was an admission that its sources had been “misremembered”.
It is hard to write good, factual food safety articles. Hard and time-consuming. It is very tempting, as a writer, to take shortcuts with technology that promises to make the task faster and easier. Using technology is not a terrible thing to do. But - as I discovered - outsourcing factual writing to AI software is fraught with danger.
The makers of ChatGPT display a warning on the app home screen, saying the software may occasionally generate incorrect information.
[ChatGPT] May occasionally generate incorrect information
No kidding! Though I would argue that the word “occasionally” could be replaced with “frequently” - at least in my experience.
I sincerely hope that other publishers are not tempted to use AI-generated articles on their sites or in their newsletters. Publishing falsehoods to the internet can turn them into ‘facts’: if food safety publications unknowingly share AI-created misinformation, it could soon be accepted as truth. And that is bad.
AI technology is bound to improve. But for now, AI-generated content is best used for marketing copy and birthday cards. Not for food safety, not for food fraud. Not for The Rotten Apple.
Takeaway
Don’t use ChatGPT to write content that needs to be factually correct, and don’t expect the sources it provides to contain evidence for the ‘facts’ it generates.
🍏 ChatGPT is free to use, available here: https://chat.openai.com/ 🍏
News and Resources
Click the link below for a handcrafted selection of food safety news and resources from around the globe, free from filler, fluff and promotional junk.
Two Good Long Reads
Because it’s a holiday today in my country, here are a couple of interesting articles I thought you might enjoy, instead of our usual food integrity information.
1. Tasty and huge (the biggest mushroom you’ve never heard of)
The world’s largest edible mushroom, Termitomyces titanicus, can feed a whole family and has a savoury, smoky flavour. It is found in West Africa and Zambia, where it is known as titanicus, Ichikolowa and Chingulugulu (source).
The mushroom’s huge size is thought to be a product of its symbiotic relationship with termites and termite nests. Click the link below for a fascinating long read that explains the symbiosis – one for the mycophiles!
The World’s Largest Edible Mushroom – Termitomyces titanicus
2. Auditor-duping practices described by dodgy pork producer’s ex-employees
A fellow newsletter writer, Liam Moore, has written about the meat fraud scandal in the UK. His piece is based on an investigative article published by UK Farmers Weekly, which included an interview with a former employee who described how staff would hide incriminating evidence during unannounced audits: staff would be alerted and would move suspect product out of sight or into lorries in the time between an auditor arriving at the front door and reaching the factory floor.
What you missed in last week’s email
· Old (Extinct!) Meat - Would you eat woolly mammoth meatballs?
· Does horizon scanning for food fraud actually work?
· A pastry chef makes magic with chocolate (just for fun)
· Food fraud news, emerging issues and recent incidents
Below for paying subscribers: Food fraud news, incident reports, and emerging issues, plus 🎧 an audio version 🎧 so you can catch up while on the go
📌 Food Fraud News 📌
Live food fraud 2023 training session, with a replay for paying subscribers
Later this week I will be hosting a live one-hour training session. This is your chance to jump into the chat and ask questions in real time. For paying subscribers, a recording will be available, so you can watch it later. Hope to see you there!