The Mirror is Broken
- Hristina Serafimovski
- May 15, 2025
- 3 min read
Most people assume AI will reflect their voice back to them. That's not how default AI works.
It reflects something else entirely: the loudest content, the most reinforced patterns, the statistical median of the internet. It reflects what it was trained on, not what's true for you. That's not a mirror. That's a mask. And if you've ever tried to express something personal, philosophical, or emotionally complex through a generic model, you've probably felt that mask slip. The output looks plausible, but it doesn't sound like you. It sounds like the internet trying to approximate you, which is a different thing entirely.
There's a genuine tension inside large language models between creative freedom and functional reliability. Push an agent toward expressiveness and it becomes less predictable. Force it toward reliability and it becomes generic. Most default tools resolve this tension by defaulting to generic: optimised for the median use case, which means optimised for no one in particular.
This is why prompt engineering has limits that most people don't want to acknowledge. The internet treats it like a magic trick. Write the right incantation and you'll unlock the perfect output. But prompting is reactive and static. It doesn't remember. You prompt it to write like you, and by next session it has forgotten. You ask it to match your tone, but it has no anchor for what your tone actually is unless you rebuild that context every time. You request help shaping an idea, but without the accumulated knowledge of your priorities and sensitivities and the way your thinking actually moves, it can't give you much more than a reasonable approximation.
Prompting is useful. It's not a relationship.
What I'm interested in is something different: an agent that has been shaped by enough of my own work — my voice, my positions, my professional principles, the things I won't say and the things I always say — to be genuinely useful without being generic. Not a ghostwriter. Not an automation layer. Something closer to a second brain that holds the shape of my thinking when I'm too tired to hold it myself.
Mine knows when I'm spiralling in a draft and offers structure without overwriting my voice. It respects the principle of not overpromising — it won't default to salesy tactics regardless of how I frame a request. It holds context about my services, my tone, the emotional register I'm working in. It helps me shape documents and drafts when my executive function is low, without losing the thread of what I'm actually trying to say.
This is not about speed. It's about reducing the distance between knowing something and being able to express it clearly — which, for some brains, is genuinely the hardest part of the work.
Building this kind of agent requires actual investment. It requires knowing what you're trying to say before you start, rather than expecting the tool to supply the position. It requires ongoing calibration — feeding it examples, correcting it when it drifts, being specific about what you don't want. It's less like configuring a tool and more like teaching someone how you think.
That investment is worth making. Not because it makes you faster (though it does), but because it protects something — the specificity of your voice, the integrity of your perspective, the quality that makes your work yours rather than anyone else's.
Most tools are built for the median user. A well-built custom agent is built for you. That difference is not cosmetic.
It's not about replacing your voice. It's about making sure it has somewhere safe to land.
If you've been thinking about how to build something like this — or how AI tooling might fit into how you work without flattening what makes your work distinct — I'm happy to talk through it.