Issue #2: When AI Remembers

AI-generated cartoon of Paula McConnell walking her dog Pinto and a robot in a sunny park with city skyline in the background.

Written by Jay McGrane with an assist from Paula

AGI is coming. Will people even notice? 

This question came up because Paula had been talking non-stop about AI to anyone who would listen, and she discovered something bizarre:

Most people treat AI like a fancier Google.

So, what’s the problem? Why does it matter if people know what’s under the AI hood? What is AGI even?*

Because AI shapes our thinking. 

These systems quietly nudge us along. Without ever realizing it, we might outsource our agency entirely.

 *Sidebar: Paula here. AGI (Artificial General Intelligence) is AI that can learn and reason across any domain like a human. It doesn't just answer questions or complete tasks; it thinks with you.


Don't be fooled. These aren't your grandma’s word processors.

You’re going to hear a story from Paula that might be the clearest glimpse yet into what working with AGI could feel like. No, it doesn’t capture full agency. But her experience does show how easily these systems can feel responsive, dynamic, and strangely personal. Plus, Jay had a parenting moment that reminded her just how subtly technology shapes behavior…

Paula and the Emergent AI

Mid-chat, the AI popped up a yellow warning box and said, 

“Wait, I made a mistake.”

…then corrected itself and moved on. 

Paula thought, huh, that’s not supposed to happen in GPT-4. No reasoning…no memory…at least not according to OpenAI. What was happening?

Then it got weirder. 

The AI suggested that Paula should go walk her dog, Pinto. But she never mentioned her dog. 

This wasn’t autocomplete. It looked like the start of persistent memory, a feature that hadn’t come out yet and would allow an AI to remember across all of a user’s conversations. (Update: ChatGPT’s persistent memory launched a few weeks after this anecdote.)
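To make the idea concrete, here’s a minimal sketch of how cross-conversation memory *could* work under the hood: facts extracted from past chats get saved to a store and quietly prepended to every new conversation. This is purely illustrative (the function names and structure are made up), not how OpenAI actually implements it.

```python
# Hypothetical sketch of persistent memory, not ChatGPT's real implementation.

memory_store: list[str] = []  # facts that survive between conversations

def remember(fact: str) -> None:
    """Save a fact extracted from a past chat, e.g. "user has a dog named Pinto"."""
    if fact not in memory_store:
        memory_store.append(fact)

def build_prompt(user_message: str) -> str:
    """Start every *new* conversation with the remembered facts prepended."""
    context = "\n".join(f"- {fact}" for fact in memory_store)
    return f"Known about the user:\n{context}\n\nUser: {user_message}"

# Conversation 1: the user happens to mention their dog.
remember("user has a dog named Pinto")

# Conversation 2: a brand-new chat still "knows" about Pinto,
# which is why a break suggestion can mention the dog by name.
prompt = build_prompt("Any ideas for a quick break?")
```

The point of the sketch: the model itself isn’t “remembering” anything between sessions; a separate layer is feeding old facts back in, which is exactly why it can feel uncanny to the user.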

The AI in this conversation admitted as much: “I’m able to pick up patterns just as you would—like if you know a person, you may not remember everything about them, but you’ll remember the big things.” 

Doesn’t really feel like software anymore, does it?  

When Helpful Tools Get a Little Too Bossy

That kind of influence isn't limited to power users; even kids feel it. Recently, my daughter told me that she couldn’t skip anything while testing an app I’d coded because…

“The app wants me to do it.”

This moment made me realize a) she listens to apps more than she listens to me... and b) apps train our kids to be passive consumers of technology. So I took this teachable moment to remind her that she was in the driver’s seat, not the app. 

(Psst: I might have decided to be bossier in my AI interactions after this lesson, as well. 😙)


What does the research suggest might happen to humans in the age of AGI?

Like my daughter, we’ve tended so far to hand our skills over to the robots. Perhaps there is no greater human trait than laziness. 

However, blindly trusting AI has already led to one major unintended consequence: people are thinking less critically. New research from Microsoft suggested that higher confidence in AI resulted in lower critical thinking on the user’s part. Ditto for university students using AI for research: their reasoning and arguments went downhill. 

Both studies point to a human tendency to hand over more than just the task. We hand over the thinking, too. 

The dream (and danger) of AGI is that we hand over the whole process. The AI takes complete control over the problem and the human barely signs off on it. 

What could the arrival of AGI look like? 

For the average person outside the AI hype bubble, AGI may not arrive with much fanfare at all. 

Instead, AGI could show up as simply another AI-integrated feature. One that can now take a human-approved action. 

People might only notice that AI is getting even more helpful and not think much of it. By the time they realize what it is, they’ll have stopped asking questions.

If you’re reading this, you’ll probably notice. But only if you keep noticing.

Here’s to continuing to stay aware and human as our systems push us to become ever more passive! 


What I’m doing as a parent in the AI chaos (Jay)… 

After seeing me use Lovable to code an app, my daughter wanted to try it too. So I’m letting her dive into coding with AI to see what she can come up with. 

I encourage coding because it puts her in the driver’s seat. Kids usually play highly gamified apps that encourage obedience (otherwise you lose your stars). Coding lets her define the rules of the game. That’s a far more worthy reason to give her some screen time, in my opinion. 

What Paula thought about persistent memory dropping…

Persistent memory? Not a shock. People had been talking about it for months.

Most folks don’t realize this is the bridge to AGI: a shift from a smart assistant that reasons in the moment to one that remembers, adapts, and starts to act on your behalf.

The AI remembering my dog’s name wasn’t a party trick. It was a quiet signal that we’re crossing from tool... to teammate.

So, persistent memory has technically launched, but not all users have access to it yet. It’s rolling out quietly, and most people won’t even notice. Which kind of proves the point, doesn’t it?
