Let’s be honest. There’s a fear hanging in the air — not just in the game dev community, but everywhere. A thick, almost tangible fear, the kind you feel you could spread on bread instead of butter. It seeps through every other thread on X (formerly Twitter), in hushed Discord debates, and in the headlines of the less-than-brilliant press. And this fear has a name — Artificial Intelligence.
And you know what? Talking about it feels absurd to me. Not because I don’t see the problem, but because we’re discussing the wrong thing entirely. There’s nothing inherently wrong with AI — just as there’s nothing wrong with a hammer, electricity, or code itself. The problem is not in the tool. The problem is in the hand that’s afraid to pick it up, and in the eyes that are afraid to see what the tool might reveal.
That’s why I want to take this discussion out of its usual frame and approach it from my own perspective. And yes, I know how it sounds coming from someone who calls themselves a Game Designer.
«Why are you even getting into this topic? Isn’t it more about artists and programmers?»
The answer is painfully simple: Because game design is, first and foremost, the craft of working with systems — with rules, loops, interactions, with that invisible glue that turns a bunch of assets and lines of code into an experience. And when a new, fundamental system appears on the horizon, one that can change the very rules of the game, my professional duty is not to hide from it in fear, but to dismantle it piece by piece and understand how it works. And, more importantly, how to make it work for me.
That’s the reason behind this little rant. This is not yet another attempt to whitewash AI, nor to pelt it with stones — there are already plenty of those articles. This is an attempt to reflect and, perhaps more than that, to simply look at it through the only lens that matters to me: the lens of systemic design and the creator’s responsibility. We won’t be discussing whether AI is good or bad. We’ll be talking about why a panicked rejection of it is not a sign of high moral ground, but of creative stagnation.
System. Wires. Systems.
Let’s forget the phrase “Artificial Intelligence” for a moment. It’s overloaded with Hollywood imagery, fears of Skynet, and pseudo-philosophical debates about “consciousness.” All of that is just noise — informational clutter that prevents us, as designers, from seeing the essence. Let’s look at this phenomenon the way we (game designers) are used to looking at any game: as a system.
Because what is any game, really? It’s a set of rules, mechanics, and feedback loops. And “AI” in this context is no exception. In game development, we’ve been working with its ancestors (algorithms!) for decades — we just called them by different names. The “Director” in Left 4 Dead, which analyzes the players’ state and spawns zombie hordes along their path — that’s systemic “AI.” The procedural level generation in Spelunky or the story generator in RimWorld, which follow strict sets of rules and constraints to create unique yet playable experiences — that’s also systemic “AI.” Procedural generation, algorithms, enemy logic — all of this could, with some stretch, have been called “AI” back then. We never feared these systems, because we understood their purpose and their limits. We knew this wasn’t “creativity,” but a complex — yet predictable — algorithm.
And modern Large Language Models (LLMs) — the very thing causing all this panic — are simply the next, exponentially more complex iteration of the same idea. I’m not saying anything groundbreaking here, merely echoing what’s already been written: AI is not a mind. It’s an incredibly powerful next-word prediction engine. It doesn’t “understand” your request. It has analyzed trillions of texts and learned, with astonishing precision, to predict which word should come next so the result looks like it was written by a human.
If we look at it as a system (because metaphors make thinking easier, and I still want to tie this back to game design), we can break it down into familiar mechanics:
- The Prompt (our controller). This is our way of interacting with the system. And just like in any good game, the outcome depends directly on the quality of our input. Asking a vague, sloppy question is like mashing every button on the gamepad and then wondering why the character isn’t doing what we want. Crafting a precise, multi-layered, contextual request — specifying role, goal, and constraints — is like pulling off a complex combo. Responsibility for the result lies not with the system, but with the operator.
- The Model (the rule set). This is essentially the “physics” of our new tool. We don’t need to know how every transistor inside works — just as we don’t have to understand the code of the Havok physics engine to use it. But we do have to know its properties: its strengths (structuring, brainstorming, finding non-obvious connections), its weaknesses (hallucinations, lack of real understanding, factual errors), and its built-in “biases” based on the data it was trained on. Ignoring this is like designing a platformer without understanding how gravity works in your engine.
- The Output (the feedback loop). And here’s the key point: what AI produces is not the final product. It’s the first iteration. It’s a “procedurally generated” draft that we, as designers, must analyze, critique, refine, or discard entirely — keeping only a single useful grain if needed. Working with AI is not delegation — it’s the start of a new game loop: “Prompt → analyze output → refine prompt → analyze again.”
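The loop described above — “Prompt → analyze output → refine prompt → analyze again” — can be sketched in a few lines of code. This is a minimal illustration, not any real API: `generate` is a deliberately dumb stand-in for an LLM call (every provider’s actual interface differs), and `design_loop` plays the role of the operator folding critique back into the next prompt. All names here are hypothetical.

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call: just echoes a draft based on the prompt.
    In reality this would be a network call to a model."""
    return f"draft based on: {prompt}"


def design_loop(base_prompt: str, constraints: list[str], max_iters: int = 5) -> str:
    """The 'Prompt -> analyze -> refine -> repeat' game loop.

    The machine generates; the human-defined check critiques; the
    critique is folded back into a sharper prompt -- the 'combo input'.
    """
    prompt = base_prompt
    draft = ""
    for _ in range(max_iters):
        draft = generate(prompt)
        # Analyze: which of our stated constraints did the draft miss?
        missing = [c for c in constraints if c not in draft]
        if not missing:
            return draft  # good enough to hand to the human for the final pass
        # Refine: restate the prompt with explicit role/goal/constraints
        prompt = f"{base_prompt} (must mention: {', '.join(missing)})"
    return draft  # best attempt; discard it, or keep the single useful grain


print(design_loop("enemy patrol behavior", ["patrol", "enemy"]))
```

The point of the sketch is the shape of the interaction, not the code itself: the output is never the product, it is one turn of a feedback loop whose exit condition the operator defines.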
When we break AI down into these parts — “controller,” “rules,” and “feedback” — all the mystical fog disappears. It stops being a threat and becomes what it should be: just another tool in our arsenal, like Unreal Engine, Figma, Notion, and so on. And the value of any tool is defined not by its complexity, but by a clear understanding of which tasks it can solve better, faster, or deeper than anything we’ve had before.
The question finally shifts from:
«What is this?»
To the only one that matters:
«What do I need this for?»
The Most Widespread Tool.
And here we arrive at the most dangerous illusion that has trapped the majority. Because the entry barrier to AI feels deceptively low — open a chat, type a question, get an answer — many have fallen into the fatal delusion that this is all there is to learn. That it’s not a tool to be mastered, but simply a talking box.
That’s nonsense. Absolute nonsense.
No one in their right mind opens Blender, creates the default cube, and decides they’ve mastered 3D modeling. No one launches FL Studio, drops a few notes from a preset library, and calls themselves a composer. We understand that behind the apparent simplicity of these tools lie years of practice, study, workflow building, and the shaping of one’s own style. So why do we deny that same depth to the newest — and potentially the most powerful — tool of them all?
The immediate answer: because this depth looks like nothing we’ve seen before. AI is not primitive. On the contrary — beneath its simple interface lies a complexity of a kind we, as mass users, have never encountered. This is the paradox: staggering complexity wrapped in the simplest possible interface, creating the illusion of complete understanding. In just a few years, we’ve evolved from simple chatbots to “agents” — systems that can independently use other tools to solve tasks. Yet most people still don’t understand how to use them, and continue to approach AI with requests like “write a birthday greeting for my friend.”
And even then, for anything more complex than that, AI will not give you a finished result. Never. You will still need to research and verify on your own. Texts — you’ll have to fully rewrite them to give them style and meaning. Images — you’ll spend a long, painstaking time refining them to get the right composition and emotion. Code — you’ll have to refactor it, because it’s often inefficient and poorly scalable.
So what, then, is its primary purpose, if it doesn’t do the final work for us?
People try to make it an executor, but its main function is to take on all the “dirty” cognitive work that slows down the creative process. Routine information gathering, structuring messy notes, sifting through hundreds of dull options to find the one worth keeping — AI does all this faster and more efficiently. It frees up our own “processor” for the most important part: generating unique ideas, making strategic decisions, and — most importantly — exercising taste. Mastery of this tool means knowing how to delegate every bit of routine to the machine, so you can keep for yourself everything that is art. It is the art of designing a dialogue that expands the boundaries of your own thinking.
«First we build the tools, then the tools build us»
— Marshall McLuhan
Whenever a powerful new tool enters any ecosystem — be it an online game, a social network, or now AI — we see the same predictable pattern of behavior. It’s the unspoken “1-9-90” rule: in any community, 1% of people create content, 9% comment on and modify it, and 90% simply consume it. Simplified for AI, the breakdown looks like this:
- 90% — Consumers. These are the people who see AI as the final product. Their interaction model is simple: Prompt = Result. They treat it like a search engine or a vending machine. Their disappointment is genuine, because the tool fails to meet their expectations — it doesn’t produce a masterpiece on demand. They’re not bad people; they simply use the tool without understanding its true nature. From that ignorance, fear can grow — because if you don’t know how it works, how can you know how it might be used?
- 9% — Operators. This group has mastered the basic loop. They know how to generate content in large volumes and they do it. Their focus is on quantity and speed, not on quality or meaning. Not out of malice — that’s just their strategy. Their activity creates the “informational noise” and tons of repetitive content that frighten the first group.
- 1% — Designers. The only ones who have understood the core truth: AI is not a source of final answers, but a testing ground for questions. Their model of interaction is dialogue, iteration, experiment. They don’t ask “do it for me,” they say “help me do it.” They use AI to generate hundreds of variations, see non-obvious connections, and test their boldest ideas — only to then apply their own will, taste, and experience to extract the single grain of gold and refine it to perfection. They do not delegate creativity. They design it.
The truth is, we (not “me and my generation” — I wasn’t even born then, but rather “we” as people) have been here before. With the internet. It was feared too, predicted to destroy social bonds and replace real life. And where did we end up? For 90% of society, it remained a consumption tool — an endless feed of social networks and entertainment. For 9%, it became a digital wasteland to exploit — meaningless videos, dead-end websites, spam, clickbait, and scams. And for that 1% (which, over time, grows but never surpasses 9%) of truly curious people, it became vital — a platform on which they built new economies, created new art forms, and gained access to the sum of human knowledge to make things once thought impossible.
So it would seem we have our answer: simply strive to be in that one percent. To seek not answers, but better questions. But what happens when that very 1% — the few who have understood the rules of the game — are forced to operate in the eye of the storm?
Because that’s exactly where we are right now. This is a real gold rush, where everyone is scrambling to stake their claim. The technology is being shoved into every possible corner — from your smart fridge to your text editor — often without the slightest understanding of what real problem it solves. The line between meaningful integration and mindless marketing hype is vanishing before our eyes.
And in this chaos, the most unfair and counterproductive reaction emerges — blanket condemnation. When LITERALLY EVERYONE and EVERYTHING ends up in the same pot, the torch-bearing crowd stops caring who’s who. It just wants to burn everything connected to the new, the unfamiliar, and the frightening.
And that’s where we come to the hard part.
The Coin’s Edge.
…
I could skip this part. Honestly. Having laid out all the systemic arguments, I could jump straight to the conclusion. But that’s the catch. My goal is not to convert anyone, nor to drag people from one camp to another. That would be foolish. And I’m not about to wag my tail trying to please everyone. So that means I have to take a side, right?
I completely understand everyone who sees AI as a problem — who looks at the Steam front page with frustration and sees the flood of shameless junk, cobbled together by “vibe coders” from generated assets and soulless images. I share that pain, because it strikes at what matters most — the value of invested work and talent.
But I also understand those for whom ChatGPT has become the new Google. Those who ask it what movie to watch tonight, to fix a sentence, to solve a problem for them, to help finish a dry work email to their boss. They’re not committing evil. They’re simply using a convenient new tool that saves them five minutes of their life.
Both sides are part of the same picture.
Everything is understood in comparison. That wonderful, thrilling feeling from your very first prompt will never come back — it’s gone. Just like the magic our parents or older colleagues once felt from their first search query in a browser. It’s become mundane.
And if there’s any side worth taking today, I’ll take the side of those who don’t try to deny the obvious. We can dig endlessly into the details — and this whole rant is about them — but the bigger picture is simple: AI is not going to disappear from our lives. It will become as much of a background presence as electricity or the internet.
And here, on the ashes of old arguments, each of us faces the real choice. Not what to do with AI, but who to be in the age of AI. Whether to see this era as the end of something old and familiar, or as a chaotic — but full of incredible potential — beginning of something entirely new.
Conclusion: Intent > Generation.
«We were born too late to explore the Earth, and too early to explore space»
The quote above is perhaps the most convenient self-deception in history. People simply don’t look at their own feet. We are not living on the sidelines — we are in the epicenter of a creative explosion, in the most avant-garde stretch of history, a moment when the majority of ALL the art we know and love is being created. And art, after all, is how we discover ourselves.
And right in this very moment, we are handed a new, untamed, fundamental tool. And what do we hear? “It will kill the soul”, “It’s not real”.
Denying AI today is not just conservatism. It’s as if the first filmmakers had rejected editing because it “kills the soul of a single theatrical take”. As if musicians had refused electric guitars because they “distort the pure sound of acoustics”. Yes, AI can generate an endless stream of brain-rot animals and other noise. But a hammer can drive a nail, or it can shatter a sculptor’s masterpiece. The tool does not define the outcome — the creator’s intent does. The difference between meaningless generation and a work of art is the will, taste, and vision of the person behind the process.
The fear many have is that AI will make creativity unnecessary. But the reality is the opposite. When generating “anything at all” becomes trivial, the value of something deliberate, refined, and deeply felt skyrockets.
Now that means only one thing — there are no more excuses. You can’t just sit this out. Don’t fear that AI will kill creativity. Fear a world where the rules for using it — and the ethics that govern it — are written by people who understand nothing about creativity, while you stood on the sidelines. In the past, you could say — “I don’t have the means to bring my idea to life”. Now the only honest answer is — “I don’t have an idea worth bringing to life”.
When magic became available to everyone, the true magic turned out to be having a dream — and the will to make it real.
And see you where the secrets are hidden → t.me/slepokNTe 👀