Your robot experience started simple. You typed a question into a chatbot… just to see. Can it answer that question? I’d be impressed if it did.
Your query was simple, a knowledge question that, with a little effort using legacy tools like Google, you could have answered yourself, but the robots made it trivial, and you thought Hmmm… if it can do that… what else can it do?
Later, you decided to ask the robots to build something for you. A simple tool, application, or script. You wrote a sentence or two; it wasn’t much, just your simple idea to get the robots dancing, and, wow, they danced. Using your two sentences, the robot built the thing. Completely. And when you ran the script, loaded the page, or launched the application, you were impressed.
The robots… they did the thing.
Dance. Robots.
It wasn’t my first attempt to get the robots dancing; it was my third project. I wanted to replace the home page in my browser with something useful. I had the robots design a home page that runs locally. It displays weather for a handful of cities, stock prices for a selection of companies and funds, and loads a random image as a background. Every 60 seconds, the image and weather rotate.
The prior paragraph is roughly what I asked Claude Code to build, and it did. After a little back and forth picking APIs to get free weather and stock information, I had a good-looking page that achieved my goals.
I want to talk about what I didn’t specify:
- I didn’t tell it what typeface to use.
- I didn’t specify where to place any elements on the page.
- I didn’t specify what weather or stock details to include.
- I didn’t ask it to include forward/back/pause buttons for image rotation.
In fact, the number of “decisions” the robot made to design the page wildly exceeded the number of requirements I specified. More honestly, I didn’t know what I wanted for this homepage when I started; I was gleefully getting the robots to dance for me.
One of the common knee-jerk responses humans have about robots is, “The robots lie,” except the humans say “hallucinate” because that rolls off the tongue. Here’s the thing: most folks assume that hallucinations are bad. Incorrect. What you dislike are the hallucinations you catch, the ones that are obviously wrong. When the robots hallucinate a helpful thing, you don’t complain. Your eyes widen a little, and you wonder, How did it know? When the robots hallucinate or make a mistake, you shake your finger in their virtual direction and complain that they don’t know how to read your mind.
Here’s the actual thing. Robots:
- Make incorrect assumptions.
- Misinterpret clear direction.
- Claim they know when they don’t.
- Make mistakes.
- Lie.
Who else does this all the time? Every single human. Like. Always.
The Prompt Progression
I’m using Claude Code almost exclusively for personal projects. If you are cutting and pasting code from your favorite chatbot into your favorite editor, you are doing it wrong. One terminal-like window (loving Ghostty) where you are jamming with your favorite robot, which has direct access to your file system to create and modify files. That is the dream.
To date, I’ve created over twenty projects, ranging from a simple Python script that looks up town populations to a fully deployed Node application that tracks productivity. If you’ve made it this far in the piece and are about to leave because you think I’m about to nerd it up and you aren’t an engineer, please stay. I’m not just going to demonstrate that anyone can get the robots to dance; I’m going to explain that the habits you’ll learn with your robot dancing will make you a better communicator… and maybe a better leader.
Four situations occurred with all these dancing robots, and each taught me a valuable communication lesson.
Situation 1: The robot misinterprets you
After the giddiness fades from your first robot dance, you stare at the artifact it created and discover that the robot didn’t quite hallucinate correctly. It built a feature in an unexpected way. When you go back and look at your first prompt, you’ll discover the problem: you didn’t specify this aspect of the feature at all — the robot just guessed.
The robots are pretty good at guessing. They’ve been trained on programming language documentation, code repositories, sites like Stack Overflow, API documentation, and best practices and style guides. This means when you ask the robots to build a home page and specify nothing about the layout (like I did), the robots guess. They look at the corpus of knowledge about homepages and infer, “Well, he doesn’t want to display a lot of information, so let’s tuck the widgets in the upper corners and center the important stuff at the bottom.”
Which, in my case, was correct.
The more I used the initial artifact, the more I found assumptions I didn’t like. Typeface was wrong (Futura now). Stock prices didn’t show percent change… oh, and hey, wouldn’t it be cool if this page reminded me about birthdays and other important dates? Let’s do that!
With each iteration of the project, I found that the more specific my request was, the better the robot performed in implementation.
Please add support for tracking important dates. I am fine editing the HTML to track these dates as I want to keep this homepage portable. Please list all of the critical dates in a calendar window. And if an important date is within 30 days or less, please gently alert me on the home page.
(Sidebar: Why am I so polite with the robots? I don’t know.)
As I progressed through future projects, I learned to devote more time to thinking through the specifics of my ideas. The robots are good at guessing what I mean, but the less room I give them to guess, the less they need to dance.
Situation 2: The robot forgets
As we discussed in the prior article, current robots must work within a finite context window. It’s exactly what it sounds like: all the current information regarding your task. The state of the project, your recent prompts, and the robot’s responses. If you’ve gone deep down the robot rabbit hole and spent hours on a project, you’ve seen the robot forget everything. Everything.
Your session has grown beyond the robot’s ability to keep track. You’ve exceeded your context window. While your project is fine, your robot is not. In Claude Code, I discovered this situation while working on a now-abandoned productivity app. The robots and I? We were in the zone, and then suddenly the robot knew nothing. Processing my next prompt, the robot said, “Huh, what is the project? I should check it out.”
You’ll experience the same situation if you start a fresh session on an existing project. The robot needs to teach itself. Now. You can let the robot search your files, or you can accelerate the process by asking it to document the project. Documentation is an LLM dream task — Hey robot, look at this code and explain what it does.
Like everything a robot generates, the burden is on you, the human, to confirm that what it generates is sound, but once that’s done, you’ve got context-generation superpowers. In my most recent project, a set of Python scripts that analyze Rands Social Reach, I have four documents:
- SYSTEM_ARCHITECTURE.md (Explains how the various Python scripts work together.)
- DEPENDENCIES.md (Explains how data files work in the system.)
- TESTING.md (Explains how to test the system.)
- TROUBLESHOOTING.md (Weird, I didn’t ask to create this. I wonder what it does? Oh cool, it captures common errors we encountered during development. Sweet. Thanks, robots.)
While the original intent was to give the robots a jump start, as the project grew more complex, I’ve found myself glancing at these documents to refresh my understanding of what we’ve built.
Situation 3: The robot makes an error
While I am writing this piece, the robots are merrily working on a different project. We just updated an HTML-based configuration tool. I asked it to add additional fields, explaining what data to track and the relationships between the fields.
The robots completed the fix. I loaded the page, and it was blank.
The robots make errors. I’d love to explain why, but after many weeks of productive work, I haven’t found any obvious pattern to why they occasionally write bad HTML or forget to do part of what I asked. You can freak out about this if you want, but it’s somewhat comforting to me because… It’s just like working with humans.
During a particularly heinous session where the robot errors were numerous, I threw up my hands and said, “Hey, write a test script that we run after every change. Ok?”
And it did.
Fast forward two hours. I’d entirely forgotten about pre-change test runs when I glanced at the robot working on the most recent change and read, “Running test suite. Ok. I found three errors. Fixing them.”
Oh.
Like everything a robot generates, the burden is on you, the human, to confirm that what it generates is sound, but once that is done, one of your least favorite tasks, test generation, has been jump-started.
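The actual script the robot wrote isn’t the point, but the idea is easy to sketch. Here’s a minimal version of what a pre-change test runner for an HTML tool might look like; the file names and required element ids (weather, stocks, calendar) are hypothetical, stand-ins for whatever your project actually depends on:

```python
#!/usr/bin/env python3
"""Minimal pre-change test runner sketch: confirm each HTML page still
contains the elements the project depends on. Not the robot's actual
script; REQUIRED_IDS and the file layout are hypothetical."""
from html.parser import HTMLParser
from pathlib import Path

REQUIRED_IDS = {"weather", "stocks", "calendar"}  # hypothetical widget ids


class IdCollector(HTMLParser):
    """Collect every id attribute seen while parsing a page."""

    def __init__(self):
        super().__init__()
        self.ids = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.add(value)


def check_page(path: Path) -> list[str]:
    """Return a list of human-readable errors for one page."""
    errors = []
    parser = IdCollector()
    parser.feed(path.read_text(encoding="utf-8"))
    missing = REQUIRED_IDS - parser.ids
    if missing:
        errors.append(f"{path.name}: missing element ids: {sorted(missing)}")
    return errors


if __name__ == "__main__":
    all_errors = []
    for page in Path(".").glob("*.html"):
        all_errors.extend(check_page(page))
    for err in all_errors:
        print(err)
    raise SystemExit(1 if all_errors else 0)
```

A blank page, like the one I loaded above, fails immediately: the required ids are gone, the script prints what’s missing, and the robot (or you) knows before anyone ships.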
Situation 4: The project that collapses on itself
Most of my robot projects started with a random idea and a poorly formed initial prompt. Most of these efforts were one-and-done situations. After being briefly impressed by the robots, I realized I didn’t really need this project. I just wanted to see the robot dance.
Some projects continued. The idea was intriguing. The problem I was solving was valuable. We continued to dance for hours. I hit limits and blew through context windows and, eventually, the robot got confused. I asked them for a significant change, and they happily started working, but as I watched them traverse the project, they were lost, updating functions and refactoring random parts of the code unnecessarily.
Stop.
Your instinct might be to blame the robots. They do hallucinate, after all.
Blame yourself.
If you pick one of my larger projects and review my series of prompts, here’s the prompt narrative:
- Build this feature.
- Add this other feature.
- Make these changes to both.
- Nuke the second feature.
- Add all of this new functionality to the first feature.
- Wait, we need to make this a node app. Do that now.
- And so on.
Spaghetti code is what we call code that random people have clearly slapped together over the years. Spaghetti thinking is how I build these unstructured projects. It’s just me and the robot yolo’ing our way through my unstructured thinking.
After placing the robot in this confusing state a few times, I revised my strategy. Rather than prototyping with code, I prototype in a spec. I explain to the robot what I want to build as a markdown file. This spec is the only thing we create. The process is no different from the first twenty prompts, except that the output is easy-to-read, easy-to-change markdown. The robots do a dutiful job of capturing my thoughts and their implications. No code. No APIs. Just writing.
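For flavor, here’s what such a spec might look like for the homepage project. This is an illustrative sketch, not the actual document; every heading and detail is a hypothetical reconstruction from the requirements described earlier:

```markdown
# Homepage Spec (draft)

## Goal
A local, single-file homepage that rotates the background image and
weather display every 60 seconds.

## Widgets
- Weather: current conditions for a handful of cities.
- Stocks: price and percent change for a short list of tickers.
- Important dates: listed in a calendar window; gently alert on the
  homepage when a date is within 30 days.

## Constraints
- Portable: dates are edited directly in the HTML. No server, no build step.

## Open questions
- Which free weather and stock APIs?
```

Changing a bullet in this file is cheap. Changing the same decision after the robot has woven it through a thousand lines of code is not.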
And when I like what I’m reading, I ask the robot to build it.
This is the third time I’m writing this, but it’s the most important part of this piece. Like everything a robot generates, the burden is on you, the human, to confirm that what it generates is sound.
I Lied
Robots don’t lie. Lying requires intent to deceive, and when a robot provides you with plausible-sounding but incorrect statements, it’s either following its programming or making an error. Or both. Humans lie. They boast, they are tragically optimistic, they exaggerate, they forget… I could go on for a long, long while. It’s a list of foibles that makes them familiar… that makes them human.
What do I do as a leader to work with these troublesome humans? Well, here’s a short, essential list:
- I speak clearly and specifically, so my intent is clear.
- I frame conversations with context so everyone understands my ideas.
- I understand errors are part of the process and work to build tools to prevent them.
- I debate and plan big ideas before I begin.
As a writer, I am giddy about working with the robots. The better I write, the better they can interpret. As an engineer, I feel empowered — that weird Python syntax convention? Who cares? Let the robots worry about that pattern; you have strategic work to do. As a leader, I am surprised to find that improving my core skills in communication, setting expectations, and planning benefits both the robots and the humans.
Learning how to get the robots to dance for you will make you a better leader of both robots and humans.
Now go build stuff. It will give you joy. Would I lie to you?