Generative AI was marketed as boundless creative potential, yet its users increasingly find themselves stuck in a loop of recycled aesthetics. Social and AI platforms claim to serve individual expression, but the images, music and text produced by these tools increasingly look and sound the same. This article traces the divide between the democratization of artistic tools and the standardization of content to subtle, systemic mechanisms: recommendation loops, optimized prompts and popularity-driven training sets.
The Comfort of Repetition
Human preference gravitates toward the familiar. Psychologist Robert Zajonc described this in 1968 as the “mere exposure effect”: people tend to develop a preference for things merely because they are exposed to them repeatedly. This cognitive tendency is foundational to how recommender systems and AI models function on social media platforms. The result is a cultural landscape in which familiarity becomes comfort and individuality quietly erodes.
As AI art tools gain traction, their users often report a curious phenomenon: the more they optimize their prompts and models, the more their work resembles everyone else’s. Prompt engineering becomes less about creativity and more about reverse-engineering what the system was trained to favour. Unfortunately, this is not a glitch in the system; it is the system.
Models such as Midjourney, DALL·E, and Stable Diffusion are trained on datasets harvested from what has already proven popular, accessible and high-quality in the public domain. As Emily Bender and colleagues warned in their 2021 paper on “stochastic parrots,” these models reproduce surface-level patterns from their training data without genuine understanding. Their outputs reflect the statistical average of their inputs, not imaginative synthesis. The result is a soup of seemingly cohesive structures. And how do we choose which of those structures feels most coherent to us?
When users select their favourite outputs from a batch of AI generations, they often choose the ones that feel most aesthetically “right” or polished. But what feels “right” is increasingly shaped by what’s already been seen. The neural pathways of both machine and human cognition narrow in tandem.
Algorithmic Aesthetics
Platforms like Instagram, Pinterest, Spotify, and TikTok further accelerate this narrowing by feeding users content they are most likely to engage with. According to Pariser’s 2011 concept of the “filter bubble,” personalization algorithms insulate users from exposure to diverse viewpoints or styles. Over time, this cultivates uniformity in taste under the guise of tailored content. Have you ever watched a single TikTok dance video, only for the algorithm to keep showing you people dancing to it, even though you have no intention of doing the same?
Empirical support for this phenomenon comes from Zhou et al. (2010), who demonstrated that recommendation systems inherently suppress diversity by reinforcing prior popularity. Anderson et al. (2020) from the University of Toronto extended this observation in their study on Spotify, showing that users who engage primarily through algorithmic recommendations explore less musical diversity over time. In both cases, personalization subtly nudges individuals toward aesthetic conformity.
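The feedback loop these studies describe can be sketched in a few lines of code. The toy simulation below is a hedged illustration with invented parameters, not a reproduction of either study’s methodology: items are recommended in proportion to their past popularity, each recommendation feeds back into the counts, and engagement steadily concentrates on a handful of early winners.

```python
import random
from collections import Counter

# Toy popularity-reinforcing recommender. Catalogue size and step count are
# invented for illustration only.
random.seed(42)

ITEMS = [f"style_{i}" for i in range(50)]      # 50 hypothetical aesthetic styles
plays = Counter({item: 1 for item in ITEMS})   # flat prior: every style starts equal

def recommend():
    """Sample one item with probability proportional to its past popularity."""
    items, weights = zip(*plays.items())
    return random.choices(items, weights=weights, k=1)[0]

def top_share(k=5):
    """Fraction of all engagement captured by the k most popular items."""
    total = sum(plays.values())
    return sum(count for _, count in plays.most_common(k)) / total

print(f"start: top-5 share = {top_share():.2f}")    # 5/50 = 0.10, perfectly even
for step in range(5000):
    plays[recommend()] += 1                         # engagement reinforces popularity
print(f"after 5000 steps: top-5 share = {top_share():.2f}")  # typically well above 0.10
```

The mechanism is a simple rich-get-richer loop: nothing about the items themselves changes, yet whatever happens to be popular early ends up dominating what everyone is shown next.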
Visual platforms are particularly susceptible. A brief scroll through any AI art community will reveal a glut of ultra-saturated, hyper-detailed images resembling digital paintings or cinematic stills. The dominant styles are cyberpunk cities, ethereal portraits, surreal dreamscapes, characters presented in unrealistic situations, and, well, a lot of cats.
Prompt Engineering Doesn’t Require a Degree
In the context of generative AI, prompt engineering was initially hailed as a new job title on LinkedIn and as the key to unlocking individuality. The reality is more constrained. Prompts now read like prescriptions: users share “magic words” that increase fidelity, coherence or visual appeal. It is a new language of efficiency, one that discourages experimentation and encourages conformity.
The more people learn how to “talk to the model,” the more their outputs converge. Midjourney users like me, for example, often rely on stock modifiers like “8k resolution,” “cinematic lighting,” or, worse, “trending on ArtStation”, terms that disproportionately boost visual quality but also homogenize results.
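To make the convergence concrete, here is a minimal sketch of what an “optimized” prompt pipeline tends to look like. The helper function is hypothetical, not part of any tool’s API, and “hyper-detailed” is an assumed addition to the modifiers cited above.

```python
# Stock modifiers that prompt-sharing communities treat as "magic words".
STOCK_MODIFIERS = [
    "8k resolution",
    "cinematic lighting",
    "hyper-detailed",          # assumed example, added for illustration
    "trending on ArtStation",
]

def build_prompt(subject: str) -> str:
    """Append the shared quality boosters to any subject (hypothetical helper)."""
    return ", ".join([subject] + STOCK_MODIFIERS)

for subject in ["a cyberpunk city at night", "an ethereal portrait", "a cat in a space suit"]:
    print(build_prompt(subject))
# Three different subjects, one shared aesthetic fingerprint.
```

The subject varies, but the stylistic fingerprint of every prompt is nearly identical, which is exactly how thousands of unrelated users end up generating near-interchangeable images.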
The problem is systemic, not individual. Caroline Sinders’ “Feminist Data Set” project critiques precisely this dynamic: when the training data and interaction paradigms are biased toward mainstream beauty or visual coherence, creative deviation is filtered out. Users may believe they are expressing themselves, but the sandbox was pre-shaped before they arrived.
The Path Forward: What Can Artists and Algorithms Do?
If generative AI tools are to help rather than dilute personal taste, their design must change. This involves more than opening model weights or adjusting training data. It means rethinking the values embedded in interface design, curation and feedback. Surprise, discomfort, and ambiguity must be seen as features, not bugs.
Projects like Lucidorium, which I have developed using generative AI, are my attempt to tackle this issue by deliberately introducing narrative ambiguity, aesthetic imperfection and visual discomfort. Rather than erasing the dark, the strange, or the unstable, such projects embrace them. Risk, illness and fragmentation are parts of reality. If we take the rose-tinted glasses off, how much content other than propaganda or AI slop have you seen on these platforms lately?
Gillespie (2014) reminds us that algorithms are never neutral; they carry trained assumptions about relevance, beauty and engagement. The systems that define what is visible online also define what becomes desirable. To reclaim personal taste, users must ask for systems that challenge them.
Researchers at the MIT Media Lab have proposed incorporating serendipity into recommender systems – offering deliberately unfamiliar suggestions to expand user horizons. While these ideas remain under-implemented, they point toward an alternative trajectory: one in which algorithms are partners in exploration rather than enablers of repetition.
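What such a serendipity-aware system might look like in practice can be sketched as a re-ranking step. The function below is a hedged illustration, not the MIT Media Lab proposal itself: the function name, the relevance scores and the 0–1 “similarity to taste profile” values are all assumed placeholders.

```python
import random

def rerank_with_serendipity(candidates, k=10, serendipity_slots=3, seed=None):
    """Return k items: mostly top-scoring picks, plus a few deliberately unfamiliar ones.

    candidates: list of (item, relevance_score, similarity_to_profile) tuples,
    where similarity_to_profile is a placeholder value in [0, 1].
    """
    rng = random.Random(seed)
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    familiar = ranked[: k - serendipity_slots]              # safe, on-profile picks
    # Candidates far from the user's existing taste profile are eligible surprises.
    unfamiliar = [c for c in ranked[k - serendipity_slots:] if c[2] < 0.3]
    surprises = rng.sample(unfamiliar, min(serendipity_slots, len(unfamiliar)))
    return [item for item, _, _ in familiar + surprises]

# Example with made-up relevance and profile-similarity values.
catalogue = [(f"track_{i}", random.random(), random.random()) for i in range(40)]
print(rerank_with_serendipity(catalogue, k=10, serendipity_slots=3, seed=7))
```

The design choice is the point: a few slots are reserved for items the system predicts the user has not yet learned to like, trading a little short-term engagement for long-term breadth.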
Personal taste is not a static identity but a dynamic process. It is shaped by exposure, experimentation and friction. If AI systems, which already exert cognitive effects on us, are to support that process, they must stop protecting us from discomfort.
The danger is not that AI makes art or aesthetically pleasing images, but that it teaches us to prefer the algorithmically promoted average. Taste, after all, is not what we like. It is how we learn to like differently.