I work with AI systems, but my work is not about producing AI content. It focuses on transparency in AI development, deployment, and regulation, and on how these systems affect real people in practice.

A year ago, I was genuinely intimidated by algorithms. Not just by how they work, but by how easily they shape visibility and narratives. Writing openly felt risky. Saying the wrong thing could mean being buried, shadowbanned, or having all those hours of work quietly made invisible. A feud between me, as a creator, and the anime streaming platform Crunchyroll drove home how entire communities get sidelined once AI, IP, and profit collide.

Somewhere along the way, that fear gave way to something else. I started out covering lawsuits and copyright issues. Safer territory: public records, defined boundaries. But as I kept digging, I found the courage to develop my own style of independent research and move into harder topics, even knowing it might cost reach, comfort, or visibility. On occasion, it already has.

Some of that research was genuinely hard to stomach. There were moments when reading through documents, cases, or real product behavior made me physically sick. That was a line I didn’t expect to cross, but once you see certain things, pretending they don’t exist isn’t an option anymore.

Despite that, I believe the problem has never been the existence of AI as a tool. My work isn’t anti-AI. Used properly, AI can support research, accessibility, creativity, medicine, and education. What I don’t believe in is deploying powerful systems without transparency, literacy, or safeguards, especially when they are designed to capitalize on human insecurities and psychology for profit while calling it innovation.

Most people don’t have the time, access or institutional backing to read regulatory documents, research papers, lawsuits or technical disclosures. That vacuum gets filled with hype, fear or corporate talking points.

Disclaimer: Some topics covered involve sensitive subjects. The research is presented for educational and transparency purposes, but reader discretion is advised.

What did we learn in 2025?

My goal in 2025 has been to make these AI systems legible: to document what is happening, where, and why it matters, without assuming bad faith, but without excusing negligence either.

AI did not create loneliness or harmful impulses. Those come from us. The issue is that once these systems were released at scale without meaningful safeguards and with profit as the priority, their misuse became inevitable. What we need now is education that keeps pace with the technology, and regulation that clearly separates fantasy from exploitation.

For 2026, I hope the risks I flagged in my research don’t keep resurfacing months later as “unexpected” tabloid headlines, after the damage is already done.

Below, I’ve organised the articles from this year into thematic sections. These are patterns that emerged repeatedly across different investigations. Each topic links to the full research for anyone who wants to read further and form their own conclusions.

All full articles are linked on the webpage, and the complete research index is also available on ORCID.

I’m an independent researcher, not backed by anyone, handling everything myself, from writing to digital marketing. Your attention and support mean immensely to me. Read what interests you, take your time, draw your own conclusions, and tell me what you think!