Welcome back to the Ink Sink, the publishing podcast for the rest of us!
Annie kicks us off with a description of the digital landscape mid-singularity. OK, that’s a little premature – we aren’t there yet.
(The singularity is the long-speculated point at which technological growth becomes irreversible and starts reshaping our development as a species in unpredictable ways.)
Alright, so what are we talking about today? Artificial intelligence! What’s going on with AI, you might ask.
(If you aren’t asking that because you’ve been tuned into the Ink Sink for the last couple of months, then you already know what’s going on. And thank you! Glad to have you back!)
Explosions! Floods! Hyperbole! This episode has it all!
There’s been an explosion of apps that take a prompt and spit out text or an image stitched together from patterns learned across enormous training datasets.
This has had an immediate impact on professional and academic publishing, both practical and theoretical. Some workers are adopting AI such as ChatGPT (the best-known app at the time of this episode) to generate content for low-stakes writing like realty listings, movie summaries, and press releases.
Academically, schools have had to address how much students are allowed to use these AI tools for their work and to establish the consequences if they abuse them.
Outside of content generation, how does AI affect the publishing world? Well, the content these bots generate often reads as plausible but may actually be misleading or entirely fabricated. And that can really hurt industries that rely on facts and on good-faith actors presenting accurate content.
In a roundup, The Morning Brew cites a Reuters article on Amazon’s Kindle Direct Publishing, the platform that self-publishers use to release their work. According to the article, authors have been submitting a substantial number of ChatGPT-aided works.
Some might even call it a “flood” – the author of the Reuters article certainly did. But what actually constitutes a flood? We dug into the numbers, and ChatGPT was listed as the author or co-author of about 200 books over roughly four months.
Authors aren’t required to disclose when their work is AI-aided, and there are likely additional works out there that don’t credit ChatGPT or other apps.
But we need to put those numbers into perspective: Kindle Direct Publishing reports around 1.4 million ISBN-assigned works every year. Prorated to that four-month window, 200 titles means only about 0.04% of the works published since November were ChatGPT assisted.
Not every Kindle Direct work gets an ISBN, so those numbers are slightly squishy. But it’s still nowhere near 1%. Maybe we could downgrade this one from “flood” to “trickle”? “Drizzle”?
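If you want to check our math, here’s the back-of-the-envelope calculation as a quick sketch. The figures are the ones cited above; the four-month window and the variable names are just our own framing:

```python
# Back-of-the-envelope check: how big a "flood" is 200 ChatGPT-credited books?
# Uses the figures cited above: ~200 titles over ~4 months, against
# ~1.4 million ISBN-assigned works reported per year.

chatgpt_credited_books = 200        # ChatGPT listed as author or co-author
months_observed = 4                 # roughly November through February
isbn_works_per_year = 1_400_000     # reported annual ISBN-assigned works

# Scale the annual figure down to the same four-month window.
isbn_works_in_window = isbn_works_per_year * months_observed / 12

share = chatgpt_credited_books / isbn_works_in_window
print(f"ChatGPT-credited share: {share:.4%}")   # ~0.04% of the window's output
```

Even measured against a full year of output instead of just the window, you only get about 0.014%. Hardly a flood.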
To be fair, at least one magazine has tweeted that it had to stop accepting submissions due to the volume of low-quality pieces it was receiving (remember that this is an industry in which proofreaders and copy editors are already stretched thin trying to weed out poorly written or bad-faith material).
Unfortunately, the magazine didn’t share its criteria for determining what counted as AI-assisted or fully AI-generated content, so we can only guess here. Either way, an uptick in low-quality submissions helps no one.
We expect too much of AI
Critical thinking? Who is she?
Elsewhere in the publishing world, Editor & Publisher references a recent report of discord at the tech outlet CNET around its “quiet” use of AI. CNET had been using AI to generate content, but that information was not always made known to the site’s meatspace employees, nor was the content necessarily, you know, correct.
CNET has not yet responded to the questions raised about the practice, and we’re sure its reasons are perfectly sound.
The Guardian also writes about this phenomenon, addressing not just the ethical and legal concerns around the content used to train AI but the nature of AI itself: the paper was contacted about an article that ChatGPT had attributed to one of its writers. That article had never actually been written.
So not only was ChatGPT’s output false, it was false in the worst way: the AI was generating content that sounded right but wasn’t remotely correct.
And, as Duke University points out, that’s what it was designed to do.
So, it turns out, AI can’t actually employ critical thinking, just style regurgitation. Maybe time to step back? CNET thought so.
Of course, not all tech firms are demurring. Press Gazette recently reported on BuzzFeed’s newest venture, a “Choose Your Own Adventure” approach to content. Insert your own adorable little character flaw and get a personal article of ill-defined nature, just for you!
Basically, BuzzFeed said: “Did you say AI only does bullshit? We love bullshit!”
(If you want more context, we discussed the fall of BuzzFeed Investigations in 2022.)
Hard news writers nervous about AI taking their jobs can probably set that specific concern aside for a bit.
That said, there are plenty of other concerning reports hitting our feeds.
The new corporate owners finally shut BuzzFeed News down. While it seems like that’s been a long time coming, BuzzFeed isn’t the only one struggling: other news organizations are restructuring as of this transcription. NPR has reported on Vice Media’s bankruptcy filing and the shuttering of Vice World News (its international journalism branch), which followed the layoffs CNN reported when Vice News Tonight closed in April 2023.
On the other side of the publishing industry, What’s New in Publishing put out a piece by Bo Sacks on his fears about AI being given additional responsibilities outside of spellcheck. He worries that style, understandability, and accessibility will all take a hit as we go further down this rabbit hole.
Consider our transcripts: if you’ve listened to any of the episodes we’ve transcribed, you’ll know that we don’t present a word-for-word transcription; rather, we paraphrase ourselves so that the episode reads well in the written word (a very different medium from the original). And we hope that our tone and demeanor still come across.
Could AI do that? Sacks’ concern is that it couldn’t. And that seems very real and very reasonable. We joke about AI capabilities right now, but we genuinely don’t know what the tech will be capable of down the line.
We also take a brief segue to remember the naive time (oh, the 1940s) when we thought automation might mean we could work less and still earn a living.
What a laugh we had.
(Disclaimer: we did not actually laugh; instead, there was a brief tangent in which we ranted about one of the Wall Street Journal’s responses to the recent price gouging in grocery stores: if you’re mad about the price of eggs, just stop eating breakfast.)
Setting aside all of our boring practicalities and selfish desire for sustenance, AI is everywhere!
But where is the bot that can reliably spot the fingerprints of other AI?
We need a program to fight for the users now more than ever.
To end on a slightly less depressing note: Annie wants foresight as her superpower (with a side of telekinesis) so she can see the future coming.
And Kali wants to be the mistress of magnetism. Because magnets! How do they work?
What is the most incredible superpower to you, and why? Let us know in the comments or reach out to us at InkSinkPodcast@gmail.com.