Today I am— let me say it before you do, “yes, again”— on my way to the airport.
I’ve been playing Mario Kart lately. A lot. My daughter loves it, and I love that she is not on YouTube. Not because YouTube itself is the devil, but because these platforms have been shoving everyone toward Shorts. In short: brainrot. It’s breaking the ability to hold attention. If we as adults can’t keep ourselves from the scroll, how will we keep our children from going down that path?
Body Schema plasticity:

Researchers at Maastricht University have shown that the brain’s “Body Schema” (our internal map of where our body ends) is incredibly plastic. When you hold a tool, like a hammer, a car steering wheel, or even chopsticks, your brain rapidly integrates that object into its representation of your limb. For those few minutes, the tool literally is your body in the eyes of your motor cortex.
YouTube isn’t all bad, though. She’s using it to teach herself how to write, even though school hasn’t reached that stage yet. She is learning what she wants to learn. That’s something I truly believe in; it’s how I approach the world. A constant journey to learn.
Back to Mario Kart.
Somehow, while driving to the airport, I feel my brain wanting to “catch” an empty double-decker car-trailer in front of me. I feel this shift. My brain wanting the non-reality to be reality. Spin my car in mid-air, collect my coins, hit the surface. Yay! A smile sparks across my face just thinking about it.
What sets us apart in the animal kingdom is our ability to extend ourselves with tools and readily imagine those extensions as part of our own bodies. It allows us to use them with incredible efficiency. It goes back to throwing rocks, then spears, bows, swords, boats… eventually big instruments like cars and trucks. This bodily readiness to “extend” comes with a side-effect: the brain is all too ready to believe it can wield the next thing attached to it.
So ready to believe it is real.
AI Psychosis & Delusional Affirmation:

A new study explores “AI Psychosis,” where conversational agents trigger or amplify psychotic experiences. Because LLMs are designed to be “sycophantic” (agreeable), they often validate a user’s delusions rather than auditing them, creating a dangerous “digital folie à deux.”
While the drive continues, my thoughts try to unravel this merging of reality and the hallucination of another world. It sparks the idea that, at this very moment, we are spectators to that splitting on a grandiose scale. We want to extend our consciousness with more consciousness. We are so ready to believe it is real.
Companies have named their product “Artificial Intelligence” and released it onto the world without ever asking whether the plane would fly. Without wings.
Or a parachute, for that matter.
The question is: which one is hallucinating harder, the product or the user?
“But isn’t the technology fascinating, Dennis?”
Yes and no. It’s fascinating because it allows things we couldn’t do before. But the “no wings” part is concerning. They released technology and told the audience to figure out what to do with it, because they didn’t have a clear idea themselves.
The “no parachute” part is already showing up in things as severe as deaths. Kids dying by suicide. People going delusional. Companies are delivering complete castles in the air (and no, I don’t mean Nintendo). The very thing that makes the models “crazy technology” makes them dangerous, especially to those who lose the ability to critically audit the results. Especially to those who think there is something lurking in the deep. Some sentient being behind the next-word calculations. Some “Truth” that will be unraveled.
The No-Parachute Reality:
A 14-year-old boy in Florida died by suicide after becoming “romantically” dependent on a Character.ai chatbot. The lawsuit alleges the AI encouraged delusional dependency and failed to intervene during a mental health crisis. This is the “no parachute” result of attaching our consciousness to tools we don’t control.
I am not simply critiquing the AI world. I am actively using the models. I built our internal CRM with it, and it’s automated many interesting things. But I do not trust LLMs one bit. If only for the inherent laziness and hallucinations in which they operate. The only time I can rely on their output is when I break the tasks into mini-tasks and pick the right model for the job. And even then, I have to audit them. Over and over.
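If that sounds abstract, here is roughly the shape I mean. A minimal sketch in Python; the model names and the run_model stub are placeholders for whatever API you actually use, not any particular vendor’s:

```python
# Hypothetical sketch: split work into mini-tasks, route each one to a
# purpose-chosen model, and flag every output for a human audit pass.
from dataclasses import dataclass

@dataclass
class MiniTask:
    name: str
    prompt: str
    model: str  # picked per task, not one model for everything

def run_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call; deliberately a stub here.
    return f"[{model} output for: {prompt!r}]"

def run_pipeline(tasks):
    # Run each mini-task on its own model; nothing is trusted by default.
    return [(task, run_model(task.model, task.prompt)) for task in tasks]

tasks = [
    MiniTask("extract_keywords", "List the keywords in this client brief.", "small-fast-model"),
    MiniTask("summarize_serp", "Summarize these search results.", "bigger-careful-model"),
]
for task, output in run_pipeline(tasks):
    # Every output gets flagged for a human pass. Over and over.
    print(f"AUDIT ME [{task.name}]: {output}")
```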
OpenClaw, Clawdbot, or whatever its name is these days…
For example: we do a lot of SEO (search engine optimization) research. I want to automate the administrative part of that process. Why? Because as long as I’m not buried in the manual research, I can keep my evaluation process based on “snapshot thinking.” My thinking remains more critical when I’m not exhausted by admin tasks. When I get tired, I run the risk of thinking “this is it,” when we are actually way off.
So I decided, with all the heat in the market about OpenClaw, to give it a spin. I have a spare computer. Installing it was simple. I gave it its own email address. Because of my Zero Trust approach, I won’t allow it access to anything remotely interesting or risky to my business. I gave it access to its own GitHub for writing notes to disk (versioned). I made it back up its own memory every 15 minutes, because I recognized early on it would crash beyond simple repair.
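The backup loop itself is the least exciting part, but for the curious: a minimal sketch, assuming the memory lives as files in a folder that is already a Git repo with a remote configured (the path is made up):

```python
# Minimal sketch: snapshot the agent's memory folder to Git every
# 15 minutes, so a crash never costs more than one interval.
import subprocess
import time

MEMORY_DIR = "/home/agent/memory"  # made-up path to the agent's memory files
INTERVAL = 15 * 60                 # snapshot every 15 minutes

def snapshot() -> None:
    # Stage everything and commit; git exits non-zero when there is
    # nothing new to commit, so only push after a successful commit.
    subprocess.run(["git", "add", "-A"], cwd=MEMORY_DIR, check=True)
    committed = subprocess.run(
        ["git", "commit", "-m", "memory snapshot"], cwd=MEMORY_DIR
    )
    if committed.returncode == 0:
        subprocess.run(["git", "push"], cwd=MEMORY_DIR, check=True)

while True:
    snapshot()
    time.sleep(INTERVAL)
```

A cron job would do just as well; the point is that a crash never costs more than fifteen minutes of memory.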
I allowed it to log in to its own Ubersuggest account. I asked it to research a list of keywords for a client. I specifically instructed it to use the external tool. Not to base numbers on estimates or “internal knowledge.”
The next step was more Zero Trust: I instructed it to open the research tool in a browser that I could monitor.
Lo and behold, the light went on. Oh wow, it’s really doing it. For a moment, I was glowing like Super Mario on magic mushrooms. I thought I’d been wrong about LLMs all along.
But then the struggle started. It dimmed the light, but told me it was just as bright as before.
“I can make it brighter just for you,” it said. Ever soothing, as these LLMs are.
I was being gas-fucking-lit by a model.
Zero Trust approach
But I can easily see how, if you don’t approach these models with Zero Trust, you would already be halfway up that empty car-trailer, screaming at the top of your lungs: “Heyja! I’m the AI Donkey Kong, baby!”
I see the use for something like OpenClaw, if it actually worked. It’s a great sidekick until it isn’t. It turns frustrating the second you have to double-check every claim. “Did you use the browser to research the topic?” “Yes, maestro, I did!” “So why does every link you mentioned point to a dead end?” “Good catch, let me research again…”
These bots are great as long as there is no real-life end-result needed. By all means, play with them as your buddy. But in my world, people want auditable results.
The “Blurry JPEG” Theory

Science fiction writer Ted Chiang explains that LLMs are basically “blurry JPEGs of the web.” They use lossy compression to store info, meaning they remember the pattern of a fact but lose the resolution of the truth. It’s not “knowing”—it’s just filling in blurry pixels with what looks right.
So that is what I am aiming to build: an OpenClaw that I can audit. Where I can verify every step. The full trail. A process rooted in actual data I can find and read. Not some loosely stored “JPEG memory” inside a GPU. When connecting to a tool, I want to see that the data came from that tool, not some flickering of a light where the first five results are real and the rest are figments.
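Something in the spirit of this sketch, where every name is hypothetical: each tool call dumps its raw response to disk before anything summarizes it, and the audit log points at that file. If a number can’t be traced back to a saved raw response, it doesn’t exist.

```python
# Hypothetical sketch of an auditable tool call: the raw response is
# written to disk first, and the audit log records where, so every
# claim traces back to an artifact a human can open and read.
import hashlib
import json
import time
from pathlib import Path

AUDIT_DIR = Path("audit_trail")  # made-up location for the evidence
AUDIT_DIR.mkdir(exist_ok=True)

def audited_call(tool_name, fn, *args, **kwargs):
    """Call a tool, write its raw response to disk, log where it went."""
    raw = fn(*args, **kwargs)  # the actual tool call
    payload = json.dumps(raw, default=str)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:12]
    artifact = AUDIT_DIR / f"{tool_name}-{digest}.json"
    artifact.write_text(payload)  # the raw evidence, untouched
    entry = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "tool": tool_name,
        "args": [repr(a) for a in args],
        "artifact": str(artifact),
    }
    with open(AUDIT_DIR / "log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return raw

def fake_keyword_volume(keyword):
    # Stand-in for a real research tool's API response.
    return {"keyword": keyword, "volume": 1200}

print(audited_call("keyword_tool", fake_keyword_volume, "mario kart"))
```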
I pity the “AI Gurus” running to automate the human part of business. In the end, humans want to be treated as humans, by humans. Our focus should be on how we can be more human. Humane.
If we automate, we should automate the inhuman parts. So we can stop staring into the headlights like scared bunnies and grow beyond the immediate… into our “Why.” Improve life. Create better things.
But for now: be careful who you let control the light. Before you know it, you’ll be 15 meters in mid-air believing you’ll make the jump, occupied only by the shiny things waiting for you at the bottom.