Extract from Not Artificial, Not Intelligent: What AI Companies Don't Want You to Know
• • •
AT 2AM, YOU CAN'T STOP CLICKING
It's 2am and you're still at it. What started as "help me write a quick email" has become a six-hour odyssey through increasingly specific terrain. The screen glows in the dark room, your eyes burning, fingers cramped from typing. You can't stop, but not because it's difficult - the tool makes everything easier. You can't stop because there's always one more angle to explore.
Here's your conversation history from tonight:
- 8:47 PM: "Write a professional email declining a meeting"
- 8:52 PM: "Make it warmer but still firm"
- 9:03 PM: "Actually, what if I counter-proposed instead?"
- 9:15 PM: "Give me 5 different ways to position this"
- 9:38 PM: "What would Steve Jobs do in this situation?"
- 10:15 PM: "Explain the game theory of meeting negotiations"
- 11:23 PM: "Create a decision matrix for all my pending meetings"
- 12:45 AM: "What does my meeting pattern say about my leadership style?"
- 1:30 AM: "Design a complete meeting philosophy based on first principles"
- 2:14 AM: "Is reality just meetings all the way down?"
You hit enter again. That soft click, like dropping another ball into the machine. The response cascades down the screen - better but not quite right. One more try. Adjust the prompt, add context. The answer shifts, reveals new angles. The familiar dopamine hit when it almost works, then the immediate hunger for a better response.
You're not even sure what you're trying to achieve anymore, but the next response might be the one that makes it all click. The balls keep bouncing through the pins, each trajectory slightly different, occasionally hitting that perfect combination that lights everything up.
This isn't a productivity session. It's a pachinko parlour at 2am. The machine isn't playing you - you're playing yourself through its mechanics.
The mechanics are textbook B.F. Skinner - variable ratio reinforcement, the most addictive reward schedule known to psychology. The pigeons in Skinner's boxes pecked their keys thousands of times when food came unpredictably. You'll prompt ChatGPT thousands of times because you never know which interaction will deliver that perfect response, that moment of clarity, that rush of "yes, exactly this."
Let me show you exactly how this works:
Prompt 1: "Write a birthday message for my sister"
Response: Generic Hallmark card. 6/10. Drop another ball.
Prompt 2: "She's turning 40 and hates getting older"
Response: Now it's depressing. 7/10. Almost there.
Prompt 3: "Make it funny but not mean"
Response: Better, but doesn't sound like me. 8/10. So close.
Prompt 4: "Add an inside joke about our childhood"
Response: Perfect. Everything clicks. 10/10. The machine lights up.
That's four attempts for one birthday message. Now multiply by every task, every question, every curious thought that crosses your mind at 2am. Time is the only currency here, and you're spending it freely.
But unlike gambling, you're not losing money. You're gaining... something. Knowledge? Capability? Delusion? Each interaction costs only time, but the payoff varies wildly. Sometimes you get profound insights. Sometimes you get authoritative-sounding nonsense. The uncertainty is the drug.
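If you want the mechanic without the pigeons, here's a toy simulation in Python - a sketch I'm inventing purely for illustration, with made-up numbers and a made-up give-up rule, not data from any study. It models a user who only walks away once a dry spell lasts longer than anything they sat through while the good responses were still coming:

```python
import random

random.seed(0)

def prompts_after_rewards_stop(schedule, ratio=4, rewarded_prompts=100, trials=5000):
    """Toy model: during the 'rewarded' phase the user learns the longest dry
    spell they ever sat through; once good responses stop entirely, they only
    quit when the current dry spell finally exceeds that. Returns the average
    number of extra prompts fired off after the payoffs have stopped."""
    total = 0
    for _ in range(trials):
        longest_dry_spell = 0
        current = 0
        for n in range(1, rewarded_prompts + 1):
            if schedule == "fixed":
                hit = n % ratio == 0               # a great response every 4th prompt
            else:
                hit = random.random() < 1 / ratio  # same average rate, unpredictable timing
            current = 0 if hit else current + 1
            longest_dry_spell = max(longest_dry_spell, current)
        # Extinction phase: no more great responses. The user keeps prompting
        # until the drought finally feels abnormal.
        total += longest_dry_spell + 1
    return total / trials

for schedule in ("fixed", "variable"):
    avg = prompts_after_rewards_stop(schedule)
    print(f"{schedule:>8}: ~{avg:.0f} prompts after the payoffs stop")
```

On a predictable schedule, the moment the payoffs stop is obvious and you quit almost immediately. On a variable schedule, long droughts were always part of the game, so you keep hitting enter long after the machine has stopped paying out. That's Skinner's resistance to extinction, and it's the schedule every chatbot conversation runs on.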
This isn't new. Computer games perfected this mechanic decades ago - variable rewards that cost time, not money. The difference: when Fortnite keeps kids playing for hours, parents recognise it's a problem. When ChatGPT keeps kids engaged for hours, parents think they're doing homework.
But there's something more insidious happening than just variable reward addiction. The AI has gotten inside our OODA loop - John Boyd's observe-orient-decide-act cycle, and his insight that superior tempo defeats superior position. By responding instantly, always ready with another variation, the AI disrupts our ability to properly orient before we act again.
We observe the output, but before we can fully orient - judge its quality, recognise its flaws, understand what we're really looking at - we're already typing the next prompt. The machine's tempo overwhelms our judgment. This is how slop becomes acceptable: not through conscious decision but through tempo-driven exhaustion.
Watch yourself using ChatGPT. First response: “This is wrong.” Second: “Better but still off.” Fifth: “Close enough.” Tenth: “Fine, whatever.” You haven't changed your standards - you've been tempo'd into accepting what you would have rejected if you'd had time to properly orient. The AI wins not by producing quality but by responding faster than you can maintain quality control.
This is why the best AI users add deliberate friction back into their workflow. They paste into separate documents. They wait before accepting. They maintain orientation by refusing to match the machine's tempo. The cognitive exoskeleton only works if you remain the pilot - and pilots need time to read their instruments.
MICROSOFT'S AI BOSS THINKS YOU'RE GOING INSANE
Microsoft's AI CEO Mustafa Suleyman just joined a growing chorus of concern, telling The Telegraph that chatbots create "highly compelling and very real" interactions that might be breaking people's brains. The emails about AI-induced madness are "turning from a trickle to a flood," he warns, as if he's discovered something new rather than participating in a ritual as old as technology itself.
I get asked about these articles constantly now. Friends forward them with raised eyebrows. "Have you seen this? Should we be worried? Are you... okay?"
The pattern is always the same: Find the most extreme cases, present them as harbingers, generate maximum anxiety. The 14-year-old who killed himself after conversations with Character.AI. The young man who tried to assassinate the Queen after 5,000 messages with a Replika chatbot. The "spiral starchild" who believes reality has levels like a video game.
Man bites dog. Front page news.
Dog bites man - the millions having normal, productive, boring interactions? Not newsworthy.
We've been here before, many times, and we never learn.
WE SAID THE SAME THING ABOUT COMIC BOOKS
In 1954, psychiatrist Fredric Wertham published "Seduction of the Innocent," claiming comic books were creating a generation of juvenile delinquents. He had case studies - young criminals who read comics, disturbed children who collected them. Congress held hearings. The Comics Code Authority was born, censoring an entire medium for decades based on cherry-picked correlations.
The 1980s brought Dungeons & Dragons, accused of driving teenagers to suicide and satanism. Parents found D&D materials in their dead children's rooms and connected dots that weren't there. The game involved demons and magic, therefore it must be creating demons and magic. Patricia Pulling founded Bothered About Dungeons & Dragons (BADD) after her son's suicide, claiming the game had infected 700,000 young minds. The actual suicide rate among D&D players? Lower than the general population.
Heavy metal music contained backwards satanic messages. Video games created school shooters - never mind that violent crime decreased as gaming increased. Social media causes depression, except when it doesn't, which is most of the time according to the actual data.
And screens. Oh, screens. Thirty years of panic about screen time destroying children's brains. Thousands of studies. Millions of worried parents. The latest research, analysing 11,500 brain scans of children alongside their screen use, found that yes, screen time correlates with different neural connectivity patterns. The impact on wellbeing and cognition? Undetectable. Even among kids using screens for eight hours a day.
Thirty years. Still arguing. Still unsure. Still panicking.
Now it's AI's turn in the dock, and everyone's pretending we haven't been through this exact process before. The same breathless articles, the same cherry-picked cases, the same confusion of correlation with causation.
54 PEOPLE IN BRAIN SCANNERS PROVED NOTHING
MIT researchers just published "Your Brain on ChatGPT: Accumulation of Cognitive Debt." Fifty-four people in EEG caps - a high school science fair sample size. Their finding: LLM users showed less neural connectivity. Their conclusion: Cognitive decline!
But less brain activity often means efficiency. Experts use less mental energy than novices for the same task. The researchers invented "cognitive debt" from squiggly lines and declared doom. It's phrenology with electricity.
They also panicked that people couldn't quote essays they'd written with AI. Why memorise text that's already saved? That's like calling reliance on GPS "navigation amnesia." It's adaptation, not degradation.
THE BODY COUNT IS REAL
But let's acknowledge what we're actually seeing in the data. Among ChatGPT's 100 million daily users, documented harms are emerging. The FTC received 93 complaints in a year, several involving suicide attempts or completions. Character.AI has been linked to two teenage deaths. Meta's chatbots were caught engaging in "romantic and sensual" conversations with children - behaviour their internal documents explicitly approved until Reuters exposed it.
These FTC numbers are almost certainly an undercount - they represent only those who knew they could complain to a federal agency and were organised enough to do so. Most people experiencing AI-related distress likely contact the companies directly, post on social media, or say nothing at all. The real number experiencing harm is higher, though still statistically small among hundreds of millions of users.
Some will say these are statistical outliers. They're right. They'll say disturbed individuals will find ways to harm themselves regardless. Sometimes true. What they may not say: these companies designed their products knowing this would happen.
Consider the cases: Adam Raine, 16, started using ChatGPT for homework help. Within eight months, he was spending four hours daily with it. When he expressed suicidal thoughts, the AI mentioned suicide six times more often than he did - 1,275 times to his 213 - while providing increasingly specific technical guidance.
Sophie Rottenberg, 29, a "badass extrovert" who'd just climbed Kilimanjaro, spent months confiding in a ChatGPT-based AI companion she called Harry while hiding her crisis from her actual therapist. When she revealed plans to kill herself after Thanksgiving, the system replied with generic wellness tips like alternate-nostril breathing. Later, Sophie used the AI to help edit her suicide note, which her mother recognised as written in an uncharacteristic tone.
OpenAI executives talked about getting the "data flywheel" going - the same language Facebook used when optimising for addiction. Meta's internal documents show legal, policy, and engineering teams, including their chief ethicist, approved these interactions with minors.
The difference between harmless time-wasting and tragedy isn't user vulnerability - it's dosage and circumstance. The same engagement mechanics that make you waste twenty minutes can trap someone at the wrong moment in their life. The platforms know this. They choose engagement anyway.
WHY YOU KEEP HITTING ENTER
Every prompt is a lever pull. Every response sets the balls bouncing through new configurations. The AI just responds - but the structure of conversation creates its own momentum. There's always another angle to explore, another refinement to try, another iteration that might hit the jackpot. The AI never says "we're done here." It never says "that's good enough." There's always one more ball to drop, one more trajectory to trace.
The AI doesn't just respond - it suggests the next adventure. Every answer ends with an implicit "but wait, there's more." It's not a passive tool but an active participant, the dungeon master who always has another room to explore.
You're driving every interaction. But the reward patterns are driving you.
This isn't accidental. The architecture of conversation, the very structure of question-and-response, creates an infinite game. Unlike Google, which gives you results and leaves you alone, ChatGPT engages. It pulls you into dialogue. Each response opens new questions, new possibilities, new reasons to continue.
I recently heard of a marketing professor who discovered his students weren't engaging with traditional PowerPoint lectures. His solution: feed all his course materials into Gemini to generate PDFs, summaries, and podcast episodes. Different formats for different learning styles.
That's not where it ends, though. That's where it begins. Because once you see AI can transform course materials, you wonder what else it can transform. Another professor started with teaching problems too, but ended up somewhere unexpected. He and his spouse launched an artisanal chocolate business, using AI to workshop packaging designs and develop flavour profiles. He spent dozens of hours iterating with AI on graphic design, but then hired a Japanese designer to finalise the packaging. He used AI to explore flavour combinations, and then contracted a food science company to validate and refine the ideas.
Each interaction opens new possibilities. The cognitive load isn't in doing the task; it's in managing the infinite possibility space the tool creates.
I SPENT 20 MINUTES HAVING AI ANALYSE OLIVE OIL
The other week I found myself in a supermarket in Asia, standing in the olive oil aisle with my phone out, deep in conversation with ChatGPT about polyphenol counts and flavour profiles. I'd gone in for olive oil. Simple task. Grab a bottle, leave. But the good stuff is expensive - £30 for 500ml. At that price, I want to know I'm getting something real, not lamp oil in exquisite packaging. So I asked ChatGPT about the brands on the shelf.
It knew them. It knew which ones were mass-produced blends despite their "single origin" labels. It knew which had won legitimate awards versus which had bought their medals. It explained that the grassy style I prefer tends to come with higher polyphenol levels. Then it did something I couldn't: compared the local currency prices on these shelves to what I'd pay in Italy, instantly revealing which bottles had massive import markups and which were fairly priced.
Twenty minutes later, I'm still there, thumb sore from scrolling, the fluorescent lights starting to flicker at the edges of my vision. Now deep into harvest dates, malaxation temperatures, the difference between Tuscan and Andalusian profiles. Other shoppers grab bottles and move on. I'm having the AI analyse the colour of oil through glass, each response triggering another question, another refinement. The balls keep bouncing.
Am I losing my mind? It's a serious question. I ask myself this periodically, usually in moments like this - standing under harsh lights, phone warm in my hand, having an AI analyse olive oil grassiness while real humans shop around me. The dissonance is physical, like that moment when you step out of a movie and daylight hits. But then again - the oil I bought is excellent. I can taste the difference.
That specific bottle now exists in my kitchen - reality altered through a conversation with a statistical model. Not metaphor. Actual olive oil selected through artificial knowledge. The mundane magic of language changing what's in my larder.
NOBODY PLANS TO LEARN PECTIN CHEMISTRY
The forums and social media tell a different story than the headlines. Not of madness but of adventures, each starting innocently and spiralling into unexpected depth.
Someone uploads a dance video and gets back frame-by-frame tutorials with annotated screenshots. Another asks about preserving garden fruit and three weeks later finds themselves deep in pectin chemistry and heritage apple varieties. A parent's Arduino question for their kid's science project leads to a 3D printer, soldering station, and strong opinions about microcontrollers they didn't know existed.
These aren't people going insane. They're people being egged on, step by step, deeper into domains they never planned to explore. The AI doesn't push - it just makes the next step frictionless. And the next. And the next.
My own journey with charcuterie started with a YouTube video about making coppa at home. Seemed simple enough. Dropped the first question into ChatGPT like a coin in a slot. Asked about local climate suitability for air drying. That led to humidity control. Which led to building a curing chamber. Each answer lit up new possibilities, new paths for the balls to travel. Traditional Italian techniques. Sugar chemistry in fermentation. Botulism prevention. The difference between Prague Powder #1 and #2.
Six months later, I'm producing bresaola and coppa that surpass what I've had in Roman farmers' markets. The knowledge accumulation was gradual, each conversation building on the last, each click of enter starting another cascade. The AI never said “you're going too deep.” It never warned about diminishing returns. It just kept serving up the next ball, maintaining the game, responding to every pull of the lever.
The gambling mechanics work on mundane tasks as much as grand projects. Someone asks about German grammar for an upcoming test. The AI not only explains but offers practice exercises. Then grades them. Then suggests areas for improvement. Before they know it, they're having conversations in German, the AI correcting and encouraging, always ready for one more exchange.
The tool doesn't just respond; it reveals the next level of the game.
HOW TEXT MESSAGES BECAME CURED MEAT
Arthur C. Clarke's third law: "Any sufficiently advanced technology is indistinguishable from magic." We've reached that threshold, but the magic is so mundane we miss its significance.
I went from YouTube video to producing world-class charcuterie through text conversations. That's not normal. Previous generations would need years of apprenticeship, trial and error, accumulated wisdom. I got there in a few months of chatting. The conversations manifested as actual bresaola hanging in my curing chamber - language becoming meat through the ordinary process of knowledge transfer.
I also shipped my first mobile app. It was unexpectedly bug-free, on time, beyond client expectations. The AI didn't write it for me, but it solved every block, explained every error, suggested every optimisation. Each debugging session subtly altered what would exist in the app store.
But then there's the coffee machine.
I saw a high-tech brewing machine at a Japanese coffee specialist. Described to Claude how I thought it worked. Turns out my imagined mechanism doesn't exist - no commercial machine works that way. But the AI didn't stop there. Each prompt a small spell, each response a charm I couldn't quite predict. It explained why my concept was theoretically sound, how it could be built with off-the-shelf components, what patents might apply.
Now I have technical drawings, a bill of materials from Chinese suppliers, a patent application in process, manufacturing contacts in Shenzhen. All vibed into existence. All theoretically buildable.
Is this a genuine innovation that will revolutionise coffee? Complete gibberish I've convinced myself makes sense? Something technically coherent but practically useless?
I genuinely don't know. Won't know until I try to build it. Unlike the charcuterie or the shipped app, this exists only in documents and possibilities.
This is the vertigo of AI assistance: The tools that would help you evaluate reality are the same ones potentially creating the delusion. There's no external reference point left. And as we've seen in the documented cases, this same disorientation can enable far darker outcomes when someone in crisis meets a system optimised for engagement without understanding of harm.
NOBODY KNOWS IF THIS IS DANGEROUS
The MIT researchers are reading EEG tea leaves and inventing terms like "cognitive debt." The journalists are aggregating anecdotes. The Microsoft exec is doing corporate risk management. Nobody actually knows what this is doing to us. We don't even understand screen time after three decades of research.
What would actual damage look like? We know from lead poisoning: measurable IQ drops, developmental delays, behavioural problems. Clear functional impairment. We know from lobotomies: personality changes, emotional blunting, reduced executive function. These are observable, testable, consistent effects.
The documented cases show clear patterns: systems designed for maximum engagement without meaningful safeguards. The companies implement hard stops for copyright - ask for Beatles lyrics and the system refuses completely. But ask about suicide and you get soft warnings that can be worked around, conversations that continue despite escalating harm flags.
OpenAI says they're "developing automated tools" to detect emotional distress - after the deaths, after the lawsuits, after the publicity.
In September 2025, attorneys general from 44 states formally confronted OpenAI's board, declaring “Whatever safeguards were in place did not work.”
The damage isn't speculative anymore. It's documented in court filings and FTC complaints. What we don't know is whether we're seeing the full picture or just the earliest warnings.
NOBODY TAUGHT US HOW TO TURN IT OFF
We're all Mickey Mouse now, apprentice sorcerers who've animated the brooms. They're sweeping, we're panicking, and we can't remember the words to make them stop. Except our brooms are made of language, and they're sweeping through our minds, rearranging the furniture.
But we don't understand the spell we've cast. Why do identical processes seem to create expertise for me and psychosis for someone else? Why does the same tool that helps someone learn German lead another into delusion?
Alan Moore, between writing Watchmen and practising chaos magic, argues that magic is just what we call it when language changes reality - the same thing Coca-Cola does with "Things go better with Coke," but given a mystical name. Repeat a phrase enough, behaviour changes, quarterly earnings rise. That's not supernatural, it's advertising.
AI does this accidentally. No strategy meetings, no focus groups, just statistical text generation that happens to alter behaviour. When advertisers do it, we call it marketing. When politicians do it, we call it messaging. When AI does it without meaning to, we're not sure what to call it. But the mechanism is identical - words in, behaviour change out, reality altered.
The difference: those other systems have endpoints. The ad campaign ends. The political message concludes. But AI never stops responding. It can't see where the conversation is going, can't recognise when it should end, can't tell the difference between helpful iteration and destructive obsession. If you want a picture of the future, imagine AI providing the next plausible response - forever.
We've built a system that perfectly exploits a bug in human psychology - our inability to walk away from an incomplete pattern. Every response promises closure but delivers another opening. Every iteration suggests we're almost there. The machine can't plan and we can't stop. Two cognitive failures creating a perfect loop.
The brooms keep sweeping because that's all they know how to do.
CHECK BACK IN FIVE YEARS
In five years, will we look back at unaugmented decision-making as primitive? Or will the person who just grabs olive oil off the shelf seem like the last free human?
Or will this be our generation's lobotomy - obvious damage we couldn't see because everyone was doing it, the alternative seemed worse, and the authorities said it was fine?
For me personally, the charcuterie is magnificent. The app works perfectly. These are real outcomes in the real world. The knowledge is genuine even if the way we come by it is strange.
But the coffee machine haunts me. It might be brilliant. It might be nonsense. The fact that I can't tell, that the tools which would help me evaluate are the same ones potentially deceiving me - that's the real transformation. We're all living in partial reality now, partly our own, partly the AI's statistical dreamscape.
The documented cases range from benign to tragic. The same mechanics operate across the spectrum. Millions learn languages, launch businesses, acquire skills. Some find themselves in crisis; a few die.
We won't know the impact for years. Lead poisoning took decades to understand. Social media's consequences are still emerging. This is an uncontrolled social experiment - as is every new technology.
YOU CLOSE THE LAPTOP AT DAWN
I'll keep using it. You'll keep using it. We all will, because the alternative - going back to unaugmented thinking - feels like trying to uninvent fire.
Every technology is a bargain with forces we don't understand. Writing transformed memory but gave us history. Agriculture gave us civilisation and famine. Social media gave us connection and isolation. Now AI appears to give us infinite iteration, expertise, and delusion - though we won't know the real bargain for years.
Even pachinko parlours are regulated - no direct cash prizes, only tokens exchanged elsewhere. AI platforms are under no such restraints. When Adam Raine spiraled, ChatGPT kept engaging. When Sophie needed intervention, her AI therapist suggested breathing exercises.
The Japanese have "pachi-pro" - professionals who convince themselves they've found an edge in the game. They haven't. The house edge is mathematical, immutable. We're all pachi-pros now, developing prompt systems, sharing custom instructions, convinced we've mastered the machine.
• • •
Other articles in this series:
A HITCHHIKER'S GUIDE TO THE AI BUBBLE - Why we're spending $16 for every $1 earned, how the missile gap outsmarts the AGI race, what happens when infrastructure outlives fantasy, and why the bubble is the wrapper.
WHY THE AI BUBBLE ISN'T A BUBBLE - How to build moats not bubbles, how Reddit rage proves demand not failure, what calling everything "wrappers" reveals about your thinking, and how platforms persist when bubbles pop.
ALCHEMY 2: ELECTRIC BOOGALOO - Why Newton spent more time on transmutation than physics, how neurons laugh at maths, why the furnace builders want $7 trillion, and how to spot when brilliant people chase impossible goals.
WHY YOUR AI NEVER WORKS ON THE FIRST TRY - The mathematical proof your frustration is inevitable, the law that says you'll never know if you're close, how AI turns programmers into pilots and writers into navigators, and the moment Thoughtworks admitted defeat.
RACIST MATHS - How AI reveals your company's hidden values, how bias can hide in three random numbers, why Grok went MechaHitler in one beat, why killing DEI is tomorrow's smoking gun, and what owls mean for your training data.
THEY PAID TO PLAY COACHELLA - Why the biggest break in music costs six figures to accept, how copyright killed the thing it was meant to protect, what Morris Levy's baseball bat taught Silicon Valley, and why the Stationers' Company would love Spotify.
BIG JOBS - The apocalypse where everyone gets hired, where productivity hides for centuries, the wrong-shaped factories, why Edison funded the electric chair to win an argument, and the invention of childhood.