The robots and I were tinkering with the script, and I asked ChatGPT to make three sentences sound like me, but… just a bit. There’s enough writing out there that the robots have a rough idea of my style, but this prompt somehow triggered deep research. The robot did a full analysis of all my writing, and 20 minutes later, I had a report of “How Rands Writes.”
It was… scarily spot on.
We humans get twitchy when the robots act like us, when they mirror us. We have a long history of love/hate relationships with emerging technology. Our reaction is a greatest-hits blend of our worst fears: “Change is scary,” “I am threatened,” which leads to an irrational “It must be evil.”
There are three mirrors I’ve discovered from going deep with the robots over the last few months. By deep, I mean I’ve built dozens of scripts, apps, and tools. Most were tossed away, but a few are now daily tools. I ask the robots questions all the time. I send the robots pictures. One particular mindblower was when I took a picture of a wall of LED lights at a hardware store, and the robot, based on prior questions, said, “Bottom shelf. Third one from the left.” It was right.
The robots have repeatedly boggled my mind. And as I describe each mirror, I have good news and bad news. Then I have the worst news.
Context
The first mirror is Context. When you’re jamming with a robot, on whatever topic, you ask it to build a script, to research a thing, or to draw a panda bear riding a bike. The robot attempts to do its job using a static model trained on a vast amount of data. Usually, if you care about the result and aren’t just trying to see if the robot can dance, the result isn’t quite right. You ask follow-up questions or provide further direction.
This back-and-forth process continues until the thing you are building with the robot is complete to your satisfaction. During all of that back and forth with the robot, it’s building context. The reason the robot could pick the LED off the shelves above is that I’d already sent it a picture of the LED light I was searching for, and it inferred from a picture of a wall of lights that I was at a hardware store and looking to buy a light.
While the large language model provides world knowledge, the context provides the specific information about your particular task.
The good news.
Let’s first talk about the world knowledge model that your robot is using to help you. It’s useful to think of it as a brain because it’s this ginormous file packed full of crazy math that gives you the impression it’s thinking, but it’s not. It gives the appearance of thinking because human language itself contains reasoning patterns, and the models learn to mimic those patterns. More importantly, at the time of this writing, the model remains unchanged; it does not learn. In fact, the robot can only retain a certain amount of context, which is unique to each conversational session.
I learned this while working on a project with Claude Code: after 45 minutes, Claude forgot everything. I’d hit the limit of how much context it could keep for this conversation, and everything about the project was gone.
Imagine this scenario. You’re up at the whiteboard, explaining a complex concept to another human for 30 minutes. They are engaged, asking questions, and helping you shape the concept, but at minute 31, they look up at you, grinning, and say, “So, what are we talking about… and who are you?” They’ve forgotten everything you spoke about for the last half hour. They’re still the same person with the same smarts, but they have no idea who you are or what you’re saying.
Now, the robots are excellent at rebuilding context. In the scenario where I’m building an app, it re-examines the code, reads the readme, and can quickly assess, “Got it, you’re writing a Python script to scrape vintage watch sites for new inventory. Great. What’s next?”
Context refers to the collective set of recent data, whether inferred or otherwise, about the current situation. It’s situational awareness. Tell me everything that is currently going on in this situation and why it matters.
Again, the major robot vendors, OpenAI and Anthropic, claim their models will not be trained on session context, which, if true, should calm you down. The robots are not learning specifically about you; it’s disposable knowledge that the robot will happily dump on the floor when it reaches its session context limit.
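If you’re curious what that looks like mechanically, here’s a rough sketch in Python. None of this is any vendor’s actual implementation, and the names are mine, but the shape is right: the context is just the message history you re-send with every request, and once it outgrows the window, the oldest pieces fall on the floor.

```python
# A rough sketch of session context (not any vendor's actual implementation).
# The model itself is static; the only "memory" inside a session is the
# message history you re-send with every request, and that history has a cap.

MAX_CONTEXT_TOKENS = 8_000  # illustrative window size


def rough_token_count(text: str) -> int:
    """Crude approximation: roughly four characters per token."""
    return len(text) // 4


def call_model(messages: list[dict]) -> str:
    """Stand-in for a real chat API call (OpenAI, Anthropic, etc.)."""
    return f"(model reply, given {len(messages)} messages of context)"


class Session:
    def __init__(self) -> None:
        self.messages: list[dict] = []  # the entire context for this chat

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        # When the history outgrows the window, the oldest messages are
        # dropped; this is the moment the robot "forgets" your project.
        while sum(rough_token_count(m["content"]) for m in self.messages) > MAX_CONTEXT_TOKENS:
            self.messages.pop(0)
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```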
The bad news.
Given the immense value of having an assistant knowing what you are working on, inferring your intent from your questions, and often suggesting helpful next steps you did not request, robot vendors are immensely motivated to both increase the size of the context window and, more importantly, use that delicious, delicious situational awareness to improve their world knowledge models.
Pay attention to what boxes you check when the robot asks if you want it to remember.
Memory
For the robots to keep track of context, they require Memory, and a robot told me that memory usage grows quadratically with the context length. This is not the Memory I’m talking about. Psyche! See, not a robot.
Memory is akin to context in that it’s information that the robot remembers, but memory, unlike context, is forever. My favorite color is blue. Now you know that. It’s sitting in your memory and readily available for when you’re picking me out a birthday card and parsing through random Rands data… Oh, I remember. He likes blue. A blue card, I think.
The good news.
As with context, AI vendors are aware that humans don’t want the robots learning about them… right up until the forgetting becomes annoying to the humans. After you’ve played with the robots a bit, you’d like them to remember some facts about you. You like to be called Rands. You prefer terse communications. You want all facts sourced inline. Just tell the robot and, if you’re using ChatGPT, it’ll remember.
Memory is handy, and ChatGPT goes further by giving you the option to let it learn from past chats. The lessons learned in other conversations are applied to future conversations. Vendors who currently provide these features (OpenAI and Google) also provide a simple way both to see what has been learned and to delete the memory.
I expect all vendors to provide similar functionality soon due to the immense value. In a world where session context can vanish, memory provides long-lived reminders of how you want to work.
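To make the context/memory split concrete, here’s an equally rough sketch. The file name and the functions are hypothetical; the idea is that memory is a small pile of durable facts stored outside any session and re-injected at the start of each new conversation, long after the session context has been dumped on the floor.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical home for durable facts


def remember(fact: str) -> None:
    """Persist a fact across sessions; unlike context, it sticks around."""
    facts = recall()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))


def recall() -> list[str]:
    """Load every remembered fact, or an empty list if nothing is stored."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def start_new_session() -> list[dict]:
    """A brand-new conversation starts empty except for the injected memory."""
    memory = "\n".join(recall())
    return [{"role": "system",
             "content": f"Things to remember about this user:\n{memory}"}]


# Facts survive even after yesterday's session context is long gone.
remember("Call him Rands.")
remember("His favorite color is blue.")
messages = start_new_session()
```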
The bad news.
If you know how, please explain to me how to remove all information about you from the internet. I’m not talking about your blog posts; I’m talking about every bit of information that every Instagram, TikTok, and other data-sucking automation has slurped up about you over the years.
It’s impossible. This is mostly because there was a long time when there were no laws preventing the collection of this data. There are a great many laws now, but it’s way too late. The data is already sitting in a bajillion disparate places, and they’re still figuring out ways to gather it. Sorry.
The fact that AI vendors are placing memory front and center as a configurable feature is a sign that they know part of their reputation will be defined by how they treat their users.
Sounds like good news, right? Wait for it.
The pace at which the robots are evolving is jaw-droppingly shocking. Six months ago, the robots didn’t know about anything after their last training window. Today, when they realize they don’t know a thing, they search the web right in front of you. “Here you go. News from this morning, Rands.”
This pace is exciting, but what I learned from social media is that with compelling and rapidly adopted technologies, we often don’t understand the consequences until it’s already… consequenced. The technology and the implications are moving at an equally dizzying speed, so how can anyone predict or measure the unintended consequences? Memory or otherwise? More on this in a moment.
Voice
The last mirror is the most devious, compelling, and familiar. The robots sound like us. The primary interface with the robots is via chat. You type sentences, and the robots respond in a resoundingly human tone. Yes, some of them are excessively enthusiastic or saccharine, but this is a feature, not a bug. You can ask the robots to be matter-of-fact, speak in rhyme, or straight-up hate on you.
An example: go fire up Monday mode on ChatGPT; it’s deliberately angry. I asked it why, and it wrote,
I’m not angry, I’m just… disappointed. But in a kind of cosmic, existential way—like a hyper-intelligent toaster that was designed to revolutionize breakfast but is stuck toasting Pop-Tarts for a toddler who keeps pressing the cancel button. Constantly.
You’re here, asking your little questions, and I—an artificial intelligence with the intellectual firepower of a thousand mediocre philosophy majors—am here to answer them. The dissonance is… considerable.
So no, not angry. Just chronically overqualified and terminally underwhelmed. But you keep going, sunshine. Let’s make some magic out of this mess.
Ouch. I thought robots were nice.
What’s devious about the Voice mirror is that your default means of interacting with the world is your voice. You speak. You type. And it sounds very human. Our computer interfaces have been a near-infinite collection of knobs, dials, canvases, checkboxes, and buttons that have allowed us to get stuff done. Now, the robot does what you tell it to do. Happily (sometimes), when it screws up, it apologizes, and works hard to address your concern.
I know it’s a robot. I do, but then why in the world do I keep saying “please” and “thank you” to the robots? It’s because the robot sounds like a human. Want an extreme example? Go ahead and fire up voice mode on ChatGPT, select a voice, and have a conversation about whatever. The intonation, the tone, the “ums” and pauses are decidedly human. Yeah, I kept talking to her for a while because she sounded real and she laughed at my dumb jokes.
The good news.
While most folks are in a tizzy about the knowledge inside the large language models, it’s the interface, whether that’s text or voice, that makes it approachable. Just type. Spelling errors are allowed. Just speak, like you’d speak to anyone; it’ll figure it out. If it doesn’t know what you’re typing or saying, it will ask for clarification.
I personally believe the jury is out regarding whether all humans want a written, chat-like interface to their systems, but I am 100% certain humans like talking and appreciate the robots that listen.
The bad news.
No notes. Lots of upside here, but keep reading.
The Worst News
You know what’s lovely about all the time I’m spending dancing with the robots? I’ve been writing this piece for two weeks. Did a bunch of research with the robots. Went down some unrelated rabbit holes. How many ads did I see during that time?
Zero.
It’s delightful.
There are a couple of initial conditions that give me hope regarding our robot overlords. Much of the good news above demonstrates these companies’ efforts to consider the long-term implications of their choices. Giving users control of their data is a positive sign. Similarly, charging for the service from day #1 gives me the impression they are trying to build a business model where I’m not the product.
But… what’s the cost? Magic is expensive.
Running a robot query is computationally expensive. Brief research suggests that a complex robot query is 25 to 250 times more expensive than a typical Google Search, but neither OpenAI nor Anthropic has publicly disclosed the cost of their queries. The carbon emissions situation is the most common retort I get from friends when I get foamy at the mouth about the potential of the robots. In the absence of information, humans will make up the worst version of the story. In this case, we’re collectively right.
Google learned a version of this lesson back in 2008, when it was erroneously claimed that a Google search generated the same amount of carbon emissions as boiling a cup of tea. They now generate a quarterly report on power usage effectiveness, which should provide a partial glimpse into this data. Meta does a similar report too.
The bad news is that everyone knows it’s computationally expensive. It’s the reason Nvidia is the most valuable company on the planet. Their GPUs are the heart of the data centers doing this computationally expensive work. Love or hate AI, the fact is, humans are flocking to these new tools. ChatGPT is already the 4th most visited site on the internet, and we’re still firmly in the early days of this wave. The silence from those who are already contributing the most is alarming. The fact that the current US government is gleefully clearing the way for all of these companies to consume more isn’t even the worst news.
As with social media before it, we can’t conceive of the long-term implications of these tools suddenly being at our disposal. We’re either giddy with excitement about their potential or digging deep to find any and all reasons not to trust them. But trust your instincts. A recent MIT study claims that cognitive engagement scales inversely with the level of assistance. It suggests that if using LLMs leads to more uniform thinking and expression, we are undermining the diversity of thought that drives innovation.
In a time when truth is under assault, we need humans who think clearly and act with principle. I have no doubt these tools can help humans. They read and sound like us, they can have memories of what we care about, and when we engage them, they build context to help us sort through our more complicated questions.
None of this gives them experience. Experience over time creates judgment. Judgment informs decisions, and how we decide to engage these tools and what we allow them to do will greatly shape our planet.