Don't be fooled. These aren't your grandma’s word processors.
You’re going to hear a story from Paula that might be the clearest glimpse yet into what working with AGI could feel like. No, it doesn’t capture full agency. But her experience does show how easily these systems can feel responsive, dynamic, and strangely personal. Plus, Jay had a parenting moment that reminded her just how subtly technology shapes behavior…
Paula and the Emergent AI
Mid-chat, the AI popped up a yellow warning box and said,
“Wait, I made a mistake.”
…then corrected itself and moved on.
Paula thought, huh, that’s not supposed to happen in GPT-4. No reasoning… no memory… at least not according to OpenAI. What was happening?
Then it got weirder.
The AI suggested that Paula go walk her dog, Pinto. But she had never mentioned her dog.
This wasn’t autocomplete. It looked like the start of persistent memory, a feature that hadn’t launched yet and would let an AI remember details across all of a user’s conversations. (Update: Persistent memory rolled out on ChatGPT a few weeks after this anecdote.)
The AI in this conversation admitted as much: “I’m able to pick up patterns just as you would—like if you know a person, you may not remember everything about them, but you’ll remember the big things.”
Doesn’t really feel like software anymore, does it?
When Helpful Tools Get a Little Too Bossy
That kind of influence isn’t limited to power users; even kids feel it. Recently, my daughter told me she couldn’t skip anything while testing an app I’d coded because…
“The app wants me to do it.”
This moment made me realize a) she listens to apps better than she listens to me... and b) apps train our kids to be passive consumers of technology. So I took the teachable moment to remind her that she was in the driver’s seat, not the app.
(Psst: I might have decided to be bossier in my AI interactions after this lesson, as well. 😙)