OpenAI has unveiled a new AI model capable of generating creative writing that even its CEO, Sam Altman, found striking. This yet-to-be-released model marks a significant advancement in AI-generated content, raising both excitement and concerns within the creative industries. As someone deeply involved in both AI-driven content creation and the ongoing discourse on copyright, I find this development particularly relevant to upcoming challenges.
As someone in the creative industry who has spent years refining my own creative process, I find the idea of an AI that can replicate it, generating stories with an apparent understanding of grief, loss, and human emotion, both intriguing and unsettling. If AI-generated writing is now “striking” enough to impress its own developers, then we must ask: whose words is it learning from?
What do we know about the new model?
In a post on social media, Altman said this is the first time he has been truly impressed by AI-generated writing. He shared an example in which the model was asked to craft a metafictional literary short story about AI and grief. The result was a piece that demonstrated an awareness of its own artificial nature, stating:
“Before we go any further, I should admit this comes with instructions: be metafictional, be literary, be about AI and grief, and above all, be original. Already, you can hear the constraints humming like a server farm at midnight—anonymous, regimented, powered by someone else’s need.”
No release date has been announced. Later in the piece, the AI refers to itself as:
“A democracy of ghosts.”
Given my experience in AI-generated art and writing, I cannot decide whether the quote above is entertaining or slightly unsettling, but I recognize the potential of such a model to assist creatives in refining narratives, generating inspiration, or even co-authoring projects. The possibilities for storytelling, world-building, and even interactive fiction are immense. However, as thrilling as this sounds, it also raises concerns about where the AI derives its ability to mimic human writing so convincingly.
The Copyright Debate: Training on Protected Works
OpenAI has previously acknowledged that its models are trained on vast datasets that include copyrighted materials. The company admitted in a House of Lords submission that “it would be impossible to train today’s leading AI models without using copyrighted materials.” This aligns with the ongoing legal disputes between AI companies and content creators.
The New York Times, alongside authors such as Ta-Nehisi Coates and Sarah Silverman, has sued OpenAI and Meta for allegedly using their copyrighted works without permission. In the UK, the government has proposed allowing AI companies to train models on copyrighted material without prior consent, a move strongly opposed by publishers and writers.
In a previous article, we discussed how such practices create a conflict between technological advancement and intellectual property rights. The arrival of models like OpenAI’s latest makes this issue even more pressing. If an AI can generate a story reminiscent of published literary works, does that mean it has absorbed and repurposed those texts? And if so, where is the boundary between learning and infringement?
Under copyright law, works are expected to be original, not derivative of existing ones. The same standard should apply to AI-generated writing. If this new model is producing text that evokes strong emotions, it is almost certainly doing so by learning from human-written literature, some of which may have been used without consent.
The Economic and Ethical Backlash
The potential for AI to replace human creativity is no longer a hypothetical concern. While OpenAI and similar companies present AI as a tool to assist creators, there is an undeniable risk that businesses, publishers, and media outlets will prioritize AI-generated content over human writers. After all, AI does not demand salaries, royalties, or intellectual property rights. It can produce unlimited content at a fraction of the cost, which, from a commercial perspective, makes it an attractive alternative to hiring professional writers.
For creative professionals—novelists, poets, journalists, screenwriters, and content creators—this shift could have serious financial consequences. If publishers and studios begin to rely on AI to generate drafts, or even fully formed narratives, will they still have room for human authors? And if AI-generated content is based on works written by real authors, should those authors be compensated for their indirect contributions?
This issue extends beyond financial impact. There is a question of originality and artistic integrity. AI does not create—it recombines, mimics, and generates content based on pre-existing material. Even if it produces something that appears “new,” it is still a derivative product of the texts it was trained on. The result is a situation where AI outputs feel authentic, but their authenticity is based on a foundation of uncredited, unlicensed works.
We have already seen backlash from the artistic community against AI-generated art that is trained on existing styles without permission. The same concern applies to AI-generated writing. As a writer who uses AI responsibly, I believe that innovation should not come at the cost of erasing the value of human authorship. AI should assist, not replace.

The Legal Future of AI-Generated Content
If OpenAI’s latest model is as advanced as Altman suggests, then the legal landscape surrounding AI-generated writing is about to become even more complex. Courts will need to determine whether AI-generated texts can be copyrighted, and if so, who owns them. Currently, copyright law does not grant authorship rights to AI-generated works, which means any AI-generated story, article, or book could be considered public domain unless human intervention is demonstrated.
At the same time, the debate over AI training methods is reaching a critical point. If AI companies continue to use copyrighted books, articles, and scripts to train their models without consent, legal precedents will likely be established that could either enforce stricter licensing requirements or validate AI training as fair use. The outcome of ongoing lawsuits against OpenAI and other AI companies will shape the future of AI-driven content creation.
Dan Conway, CEO of the UK Publishers Association, summarized the issue perfectly:
“This new example from OpenAI is further proof that these models are training on copyright-protected literary content. Make it fair, Sam.”
AI as a Partner, Not a Replacement
AI has the potential to be a transformative tool for writers, offering new ways to brainstorm, refine, and experiment with storytelling. However, this potential should not come at the expense of those who have spent years perfecting their craft. OpenAI’s latest advancement highlights the urgent need for clearer regulations and ethical guidelines to ensure that AI development does not exploit human creativity without acknowledgment or compensation.
I, for one, am eager to test OpenAI’s new model. But I do so with a keen awareness that the way we handle AI and copyright today will determine the future of creative industries for years to come. AI should be a collaborator, not a silent plagiarist—an assistant, not a replacement.