
A few months ago, I asked ChatGPT to “remember” that my 7-year-old son, Max, is a sports fanatic. We were stuck in traffic on a long drive, and my best bet to delay the inevitable iPad handover was to get ChatGPT to generate math word problems. I added, “Use sports in the examples, especially basketball, hockey, and soccer. And teams like the Knicks, Sharks, and Real Madrid.”
ChatGPT nodded (metaphorically). The little memory dots pulsed in agreement. We spent the next 20 minutes happily figuring out how many players you’re left with if you start with five Knicks starters, add two subs, and then someone fouls out. Victory.
ChatGPT really took that memory to heart.
Since that April afternoon, I have not been able to escape the Knicks. They sneak into travel itineraries. They pop up in analogies while I’m doing diligence on robotics startups. Last week, I asked for an image of a “sticky spider web” to accompany a blog post on systems of record, and the spiders were all wearing Knicks jerseys. It took multiple prompts to get them to take off the uniforms.
I’ve even told ChatGPT to forget Max’s fandom. No luck. Like that one friend from middle school who will never stop bringing up your misguided talent show solo, ChatGPT is fully committed to my past. It remembers Max’s fandom and, by association, mine.
This is no doubt a minor bug, destined for the fix-it list. But it’s gotten me thinking more broadly about memory in AI.
At USV, we’ve been deep in conversation about how memory—specifically, personal data aggregated over time—might be the golden ticket in this new platform race. The product that remembers us—our preferences, our quirks, our contradictions—may be the one that wins the longevity game.
But in that future, what happens to the beauty of forgetting?
In a hyper-personalized ecosystem, how do we keep our preferences dynamic and implied rather than rigid and engraved? What about the ones that are under the surface, still forming, not yet ready to be named?
@Teknosaur on X nailed it, referencing Inside Out: memory isn’t passive storage; it’s “an active, emotionally weighted system.”
So maybe the best AI memories won’t be flawless databases. Maybe they’ll be artful editors, skilled at remembering imperfectly: knowing which signals are core, which are passing whims, and which should fade gently into the background.
Ultimately, we want our tools to know us better than we know ourselves. But we also want the freedom to evolve, change our minds, and shed old identities (without having to explicitly declare that we’re now, god forbid, kind of into the Warriors).
As AI gets more personal, I’m hoping it learns not just what to remember but what to quietly forget.
Rebecca Kaden