Artificial intelligence can now tell tales featuring your kids’ favorite characters. It’s copyright chaos—and a major headache for parents and guardians.
The problem with Bluey is there’s not enough of it. Even with 151 seven-minute episodes of the popular children’s animated show out there, parents of toddlers still desperately wait for Australia’s Ludo Studio to release another season. The only way to get more Bluey any faster is for parents to create their own stories starring the Brisbane-based family of blue heeler dogs.
Luke Warner did this—with generative AI. The London-based developer and father used OpenAI’s latest tool, customizable bots called GPTs, to create a story generator for his young daughter. The bot, which he calls Bluey-GPT, begins each session by asking people their name, age, and a bit about their day, then churns out personalized tales starring Bluey and her sister Bingo. “It names her school, the area she lives in, and talks about the fact it’s cold outside,” Warner says. “It makes it more real and engaging.”
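Warner assembled Bluey-GPT through OpenAI’s no-code GPT builder rather than by programming, but the same behavior can be approximated directly against the company’s chat API. The Python sketch below is illustrative, not a reconstruction of his bot: the system prompt wording, the placeholder characters, and the model choice are all assumptions.

```python
# pip install openai
# A rough approximation of a personalized story bot like Bluey-GPT.
# Warner used OpenAI's no-code GPT builder; this sketch gets similar
# behavior from the chat completions API. The prompt wording and the
# unnamed "two dog sisters" are placeholders, not Warner's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a bedtime-story bot for young children.
Start every session by asking for the child's name, age, and a little
about their day. Weave those details (school, neighborhood, weather)
into a short, gentle story starring two dog sisters. Keep the language
simple, and always end with everyone going to sleep."""

def tell_story(child_details: str) -> str:
    """Generate one personalized bedtime story from the child's details."""
    response = client.chat.completions.create(
        model="gpt-4",  # GPTs run on GPT-4; any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": child_details},
        ],
    )
    return response.choices[0].message.content

print(tell_story("I'm Ada, I'm 4, and it was cold at school in London today."))
```

The heavy lifting happens in the system prompt; a custom GPT is essentially that, fixed instructions (plus optional files and tools) wrapped around the same underlying model.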
The main version of ChatGPT has been able to write a children’s story since its launch last year, but GPTs allow parents—or anyone, really—to constrain the topic and start with specific prompts, such as a child’s name. Anyone can now generate personalized stories starring their kid and their favorite character, with no need to wait for Ludo to drop fresh content.
That said, the stories churned out by AI aren’t anywhere near as good as the show itself, and they raise legal and ethical concerns. At the moment, OpenAI’s GPTs are available only to those with a Plus or Enterprise account. The company has suggested they may be rolled out to other users, but custom agents are believed to be among the concerns that led to the company’s recent board-level drama, and researchers have flagged privacy problems with GPTs, so a wider release could be a ways off. (OpenAI has yet to reply to requests for comment for this story.)
Warner built his GPT at the beginning of November, intending to put it up on the GPT Store that OpenAI had in the works. That never came to pass. Just five days after he advertised Bluey-GPT on Instagram, he got a takedown notice from OpenAI, which disabled public sharing of the bot. Warner knew using Bluey as the basis for his GPT would be fraught, so he wasn’t surprised. Trademarked names and images are almost always a no-go, but the laws around stories “written” by AI are murky—and Warner’s Bluey bedtime stories are just the beginning.
UNPACKING WHICH LAWS apply isn’t simple: Warner is based in the UK, OpenAI is in the US, and Ludo is in Australia. Fictional characters can be protected by copyright in the UK and the US, but it’s more complicated in Australia, where simply naming a character may not be an infringement without including further elements from the work.
In the UK, the legal protections for characters cover names as well as backstory, mannerisms, and expressions, says Xuyang Zhu, a lawyer on the technology, IP, and information team at the firm Taylor Wessing. “That copyright can be infringed if a character is replicated in another context in a way that reproduces enough of these aspects,” Zhu says, adding that rights holders may also take action if their characters risk reputational damage. Alternatively, they may see fan creations as a way to drive engagement—after all, fan fiction is hardly new online, and rights holders often tolerate it. These stories are just made by AI.
Still, the stories produced by ChatGPT, and therefore Bluey-GPT, are so generic they have little in common with the actual Bluey characters beyond names. Matthew Sag, a professor of law and AI at Emory University, says character-specific GPTs create more trademark problems than copyright ones. Around the time Warner created his Bluey bot, he also made one using Paw Patrol characters. “It didn’t produce any content that was remotely similar to the children’s cartoon series,” says Sag, who played around with the bot. “But I still don’t think that you should be able to do this without permission from the rights holders. Just like you shouldn’t be able to market a Coca-Cola GPT.”
If generative AI were good enough to truly mimic Bluey, or if prompts were detailed enough to produce a more specific output, it’s possible that a children’s story generated by a GPT could infringe on the copyright of the show’s creators. But Sag argues that responsibility would lie with the person asking for those stories rather than OpenAI, which does filter for requests that may infringe copyright. When I prompted ChatGPT to write a story about Bluey and Bingo from the TV show, the results varied. Some of the time ChatGPT did as asked, but just as often it refused on copyright grounds—once, it offered to write a story starring dogs named “Lulu” and “Bongo” instead.
The Bluey-GPT is still available to Warner and his daughter, but it can’t be shared publicly or monetized in the forthcoming GPT Store. However, anyone with access to ChatGPT, free or paid, can still ask the chatbot to write a personalized story starring Bluey and her sister Bingo. It appears that, in this case, the problem wasn’t the AI-generated fan fiction but the attempt to sell it.
WARNER ISN’T THE first to see financial potential in AI’s ability to generate stories, though story-making apps such as Oscar, Once Upon a Bot, and Bedtimestory.ai use generic characters or those that are in the public domain. Some apps include AI-generated illustrations or the option to have the story read aloud.
Oscar has tight constraints instead of an open prompt like ChatGPT’s. People can request stories in one of two ways: They can supply their child’s name, age, and gender and then prompt the bot with an animal, profession, and moral (an astronaut fox that learns perseverance, for example), or they can provide a child’s details and then choose a story from an existing, out-of-copyright universe such as The Wizard of Oz or Alice in Wonderland. That differs from Bedtimestory.ai and Once Upon a Bot, which have open prompts similar to Warner’s Bluey-GPT, allowing people to ask for stories about Bluey characters. Both delivered such stories when prompted, though neither came remotely close to the content that Ludo Studio produces: the former app misgendered Bluey, and the latter treated the characters as pets.
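Oscar’s code isn’t public, so the following Python sketch is only a guess at the shape of its constrained approach; the menus and wording are invented. It illustrates the relevant difference from an open prompt: when every field comes from a fixed list, users can’t request trademarked characters in the first place.

```python
# Illustrative only: Oscar's actual implementation isn't public. The idea
# is that the user never types a free-form request; every choice comes
# from a menu, so trademarked characters can't be asked for at all.
ANIMALS = {"fox", "penguin", "elephant"}
PROFESSIONS = {"astronaut", "chef", "firefighter"}
MORALS = {"perseverance", "kindness", "honesty"}

def build_prompt(name: str, age: int, animal: str, profession: str, moral: str) -> str:
    """Assemble a story request from menu choices plus the child's details."""
    if animal not in ANIMALS or profession not in PROFESSIONS or moral not in MORALS:
        raise ValueError("choice is not on the menu")
    return (
        f"Write a bedtime story for {name}, age {age}, about a {animal} "
        f"who works as a {profession} and learns about {moral}."
    )

# e.g. the article's astronaut fox that learns perseverance:
print(build_prompt("Ada", 4, "fox", "astronaut", "perseverance"))
```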
Putting aside Bluey and her sister Bingo, are the customized stories generated by AI even worth reading to your children? Research suggests personalized stories do help grab kids’ attention, says Natalia Kucirkova, a professor in reading and development at The Open University, though personalization works best when the books are well written and designed to appeal to both children and their parents.
Writing a high-quality storybook is no easy task. Olaf Falafel is a comedian, illustrator, and author of Blobfish, a children’s book about a lonely blobfish’s search for a friend, featuring moral lessons on litter, stepping out of your comfort zone, and looking past appearances, as well as plenty of excellent jokes and puns. AI can’t do this. Falafel is sure of it. “I tried to get it to write a joke and it failed—no matter how much I tweaked it and pushed it, it just couldn’t get how a joke works and what makes something funny,” he says.
With or without Bluey and friends, the biggest challenge facing these AI-generated stories is that they’re dull. “I want a twist or something that’s different or unique,” Falafel says, but his attempts to run his own characters through AI kept coming up with the same plot: “There’s a lot of buried treasure.”
ANOTHER CONCERN WITH bots that create stories for kids is making sure what they churn out is actually safe for children. Giving Bedtimestory.ai a prompt that included profanity, fecal matter, and crime led to a story entitled “The Crapulent Bandit’s Heist,” which may offend some but was more goofy than scary. More violent prompts, however, did create content not suitable for the requested age range of 2 to 3 years old, including a story that begins: “Once upon a time, she shot her mom in the face.” The promised murder doesn’t actually happen, though there are gun references.
Cofounder Linus Ekenstam says Bedtimestory.ai relies on OpenAI’s moderation API to ensure content is family-friendly. “It’s not 100 percent watertight, but we are constantly improving, and … it will steer away from this type of input,” he says. “As the available tools become better, and we learn more about how to guide and steer the model, this too will improve over time.”
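The moderation endpoint Ekenstam mentions is a public part of OpenAI’s API, so the kind of gate he describes looks roughly like the Python sketch below. How Bedtimestory.ai actually wires it in isn’t public; the pass/fail logic here is an assumption.

```python
# pip install openai
# Sketch of a safety gate using OpenAI's moderation API, the service
# Ekenstam says Bedtimestory.ai relies on. The rejection logic is
# assumed; only the endpoint itself is documented OpenAI API surface.
from openai import OpenAI

client = OpenAI()

def is_safe_prompt(prompt: str) -> bool:
    """Return False if OpenAI's moderation endpoint flags the prompt."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

story_request = "A story about a brave rabbit who learns to share."
if is_safe_prompt(story_request):
    print("Prompt passed moderation; go ahead and generate the story.")
else:
    print("Prompt flagged; ask for a different story idea.")
```

The endpoint also returns per-category scores alongside the binary flag, so an app aimed at toddlers could set stricter thresholds than the default; relying on the flag alone is one way borderline-violent prompts can slip through.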
While traditionally published children’s books generally avoid random forays into violence, they aren’t perfect. Many parents have books they grow to hate after multiple rereadings, while older classics get criticized for outdated values. Roald Dahl’s stories, for example, have been edited to remove negative descriptions of characters’ weight, baldness, and skin color. Publishers continue to struggle with diversity and representation: A UK study recently found just 5 percent of children’s books have Black, Asian, or minority ethnic protagonists. AI-generated books could be one way for parents to make characters that represent their own families, though that could also prove problematic given the bias found in many generative AI models.
ChatGPT wasn’t designed to write children’s books. An AI model embedded with knowledge of how to teach reading and trained only on the best children’s books would produce better outputs—in theory at least, says Kucirkova. She points to LitLab, which uses AI to generate “decodable content”: short stories built around phonics lessons, though the platform was created for teachers rather than parents.
IN THE MEANTIME, AI-generated bedtime stories may infringe copyright, lack quality, and require safeguards, but if they get a book-bored child to read, harried parents may not mind. Kucirkova worries that while AI might be a fun tool in families that already read widely, it might become a crutch in others. “For children who might need personalized attention the most, they may be left to navigate various open solutions without the necessary safeguards or care,” she says.