President Joe Biden demanding boba. A children’s book for sale on Amazon. A trailer for “The Simpsons” if it were an ’80s sitcom.
Content generated by artificial intelligence is quickly becoming a mainstay on major internet platforms, wreaking havoc on some that aren’t ready to take on what is sometimes called “synthetic media.”
Although some creators are relishing the comedic potential of generative AI tools and racking up millions of views across social media platforms, others, like magazine publishers, are struggling to deal with a flood of unusable AI-generated content.
Clarkesworld, a sci-fi magazine fueled entirely by stories submitted by external writers, was forced to close its online submission portal after AI-generated submissions skyrocketed last month.
“It has buried our workload and buried the submissions we were interested in reading,” said Neil Clarke, editor-in-chief of Clarkesworld.
“It’s like trying to have a conversation with somebody in a room and a small horde of screaming toddlers wander in,” Clarke added.
AI-generated content has also been banned in some instances. Getty Images and Shutterstock said in October that they were banning visual art created by AI. In January, a top AI conference said AI-generated papers were prohibited.
Some platforms have rules about AI content, but not many. Google welcomes AI-generated content as long as manipulating rankings in search results isn’t its primary purpose.
The question some AI researchers say they are grappling with is whether AI-generated content is enhancing internet creativity or squandering it.
“We have very little control over how AI is being deployed on the web and how people are building products,” said Maggie Appleton, a product designer at the AI research lab Ought who has written about what AI content could mean for the internet.
“The challenge is how we decide to have agency over all this as it unfolds and the way AI is used in society,” Appleton said.
For now, there are few if any rules on major internet platforms about AI-generated media.
On YouTube, some users have uploaded dozens of videos of entertainment genre mashups like “Star Wars as a ’70s Spaghetti Western” or “Arabian Psycho,” created with AI platforms like Midjourney and Stable Diffusion.
Brett Schickler, a salesman from Rochester, New York, said he recently published a 30-page children’s book using ChatGPT, an artificial intelligence chatbot that has proven adept at producing written work mimicking a variety of genres in response to simple prompts.
Amazon’s Kindle store had over 200 e-books listing ChatGPT as author or co-author as of mid-February, according to Reuters.
“I probably created the whole book in under one hour,” Schickler said.
And while his first book didn’t sell more than a few digital copies, Schickler’s TikTok account racked up hundreds of thousands of views with videos detailing how to leverage ChatGPT and Kindle Direct Publishing to become an author with less creative hassle.
Also on TikTok, videos created using AI-powered voice tools have taken off in recent months, with people generating videos of celebrities and politicians having all manner of conversations.
Comedian and social media creator Elyza Halpern started a social media series called “Joe & Barry,” featuring outlandish comedic conversations between Biden and former President Barack Obama. Recent videos have included a disgruntled Biden wanting a boba drink and complaining that Beyoncé got snubbed for album of the year at the Grammys.
Although the dialogue between the two presidents is silly and far-fetched, the voice pitches of the duo are eerily spot-on. Halpern said she pays $5 a month for an app called Celebrity Voice Changer to create her videos.
While Halpern said AI tools in the comedy community create unique and expansive opportunities, she also fears for her friends who do impersonations for a living.
“It’s terrifying for many creators. Businesses that used to hire real humans to create things will now think they can get away with using AI,” she said. “It’s really like a double-edged sword.”
AI programs work differently depending on the kind of content they generate, but they share some similarities, most notably in how consumers use them. Users come up with prompts or ideas that they then send to the AIs, which produce pieces of media by drawing on the large data sets they have been trained on.
Schickler said it’s like having a conversation with someone — going back and forth with the machine discussing the book’s plot and characters, with the story eventually coming together. The same prompting technique was necessary with Midjourney, the AI program Schickler used to generate illustrations for the book. The process was simple, but it still required some level of creativity, he said.
Although there are detection tools to help sift through AI-generated content, their effectiveness is mixed. Clarke said such tools are less than 50% accurate in his experience, leaving his team to sift through the submissions themselves.
Looking to the future of generative AI, Appleton said she expects an internet that is more expansive yet more difficult to navigate.
“There’s a lot of scary things for the human condition,” she said. “I think we will have access to more information, but it will be harder to know what was actually human.”