
Crazy Charlie’s Window

Second year of UCSC. Decades ago. A predilection for Domino’s Pizza and a fundamental misunderstanding of how credit cards worked left me in a financial hole. I needed a job, any job, and the School of Social Sciences offered a job aligned with my computer science degree, lab technician.

A solid interview with a prompt job offer. This is going to be easy.

No, this is going to be boring.

The lab technician role was among the lowest rungs on the Social Science ladder. My boss was a tenured professor who was somehow protecting grant money by hiring a minimum-wage lab technician. Her description of the role during the interview and what she described on my first day had little in common.

“Make sure the lab is tidy. Make sure the terminals work. And whatever else you want to do. You have a storage room where we store old terminals and other random hardware. You have an office. It used to be Crazy Charlie’s. It’s a mess.”

The lab was a large basement full of rectangular tables, each with two terminals on top. I never saw a human use the lab in my first two weeks on the job. The pre-PC terminals were heavy and ugly. A quick assessment revealed that 50% of them refused to turn on.

And that was the good news.

The storage room was long, perhaps twenty-five feet deep, with ceiling-high racks on both sides. These racks were jam-packed with discarded terminals, cardboard boxes full of who knows what, random bits and bobs of technology, and then just random crap. I could get halfway to the back of the room before being stopped by overwhelming clutter. My assessment: random humans had been leaving and piling random crap in this room for a decade.

And that wasn’t the worst news.

My “office” made the storage room look tidy. The prior occupant, described to me by multiple people as “Crazy Charlie,” had apparently been a hoarder, and this office, roughly half the size of the computer storage room, featured a stunningly diverse set of detritus. If I squinted, I could see a desk somewhere under all that junk, but I couldn’t see the back of the office nor much of the side walls. The ceiling appeared intact.

There was no guidance on what to do with the office and storage room. Just the basic “keep the lab tidy” instruction and nothing more. Part-time gig. Twenty hours a week if I wanted; ten required. While she did not say this, I felt my boss knew I was a college student, this was a college job, and she was giving me permission to phone it in.

Keep the lab tidy.


Two Treasures

It was January. The gig lasted through the school year. The first week, I did what was asked of me. I kept the lab tidy, which meant I spent an hour shuffling around the lab and confirming that the working terminals worked. For the non-functional terminals, I tinkered. Flipping switches on the back of the devices meaninglessly. Oh, look. Now it works. I wonder why. Twice, I went into the storage room to assess the chaos. Twice, I left after five minutes, feeling overwhelmed.

The second week, I did even less tidying. I’d discovered MTrek, and since no one seemed to care, I played MTrek on the terminal. No one noticed because no one was there. As the week came to a close, it was clear:

  • No one was expecting anything out of this gig.
  • No one cared how I spent my time.
  • No one cared if I did nothing.

But I do.

The weekend after the second week, I arrived at the basement of the Social Sciences building at 6am and emptied the contents of the storage room into the hallway. If you were walking through this hallway at the time, you would assume a nerd junkyard had exploded.

My goal with this strategy was to get a sense of everything collected in this space. The categories ended up being:

  • Garbage to be thrown away immediately. No obvious value. This was 40% of the crap.
  • Terminals. Lots of terminals from many years. The functioning ones were kept, and the others were flagged for recycling. Note: I had no idea if recycling was an option, but filling the garbage bin outside the building with dead terminals seemed bad.
  • Documentation. This was the first treasure. Someone had deliberately organized documentation for the terminals. They’d collected this documentation and labeled the front of manuals to clarify which documents described which terminals. Suddenly, the lab wasn’t full of random crap, but terminals I now understood how to manage.
  • Esoteric hardware. The deeper I went into the storage room, the more random, interesting hardware I discovered. There was a punch card computer complete with punch cards. There were odd measurement devices. From their placement and how they’d been stored, it was clear these machines had been placed here for safekeeping.
  • Knick knacks. Someone had kept a collection of Coke bottles. Like the esoteric hardware, these bottles were here for a reason that I would never understand but could tell existed.

With the hallway strewn with my discoveries, I performed a deep clean of the storage room and then moved saved items back into the room, keeping a handwritten log of each item and, if applicable, a count. Working terminals were labeled as such, non-working, too. Documentation was sorted and placed in a filing cabinet. Esoteric hardware and knick-knacks were carefully placed as before — in a quiet, dark place of honor.

I finished the storage room purge late on Saturday night. It is a truth that you can’t fully appreciate a deep clean until you’ve had time to forget the work involved, but this room was now blissfully clean and orderly. I could walk from front to back easily. I could sense the stored terminals were placed deliberately — there was a sense of system. And the knick-knacks, the strange hardware, gave this room no one cared about a sense of playful mystery.

I originally planned to spend a few hours in the storage room that weekend. I spent ten. As I went to sleep in my dorm, smelling of dust and sweat, I stared at the ceiling and decided to finish the job correctly.

The Office

The garbage factor in Crazy Charlie’s office was twice the amount in the storage room. At some point, someone had decided to use this office as a place to put garbage that somehow wasn’t garbage yet.

Dozens of trips to the garbage bins outside of the building. After I filled one, I walked around the building until I found another. Unlike the storage room, there were few clusters of interesting items. Some terminals revealed themselves, as well as additional documentation, but mostly, garbage.

Once I reached the back of the room, I discovered the second treasure. Someone, sometime, had built a set of French windows on the room’s back wall. The windows opened… to a blank concrete wall. Against that wall, inside the windows, there sat an empty leather satchel.[1]

Crazy Charlie, I assumed, had built French windows in his dark basement office. A window to nowhere. Perhaps this is why they called him Crazy Charlie?

You Know What’s Crazy?

Calling people crazy. It’s ignorant. It’s cruel. It minimizes humans rather than supports them. It demonstrates a profound absence of empathy.

In my weekend of purging the storage room and the office, I’m sure I was wandering around a small, interesting part of Charlie’s mind. Someone who was trying to preserve the knowledge about these terminals by saving and organizing this documentation, someone who was keeping interesting hardware, someone who liked Coke bottles, and someone who wanted a view.

You know what else is crazy? Wasting your time. The reason, decades later, I frequently think of this unpaid weekend adventure sifting through years of garbage, hardware, and knick-knacks is that it is when I discovered the compounding, non-obvious value of doing exceptional work.

No one saw the tidy office or the storage room for weeks because they’d given up on those spaces. It wasn’t until Week #4 when my boss brought her flaky terminal to the lab and asked for help. I walked her into the pristine storage room, pulled a working terminal off the shelf (there were four), and told her what switches she’d need to flip to get it to work on our network.

“What happened here?” she asked.

“I cleaned it up.”

Week #6. Her boss walks into my office, which is now a proper office with two desks: one for my computer, the other for repairs. He says, “I’d love it if you could walk me through how you organized your terminal closet. We have one on the third floor that needs help.”

I grabbed the red binder, which had the up-to-date inventories of the lab, closet, and office, dropped it on the desk, and asked, “How can I help?”

Week #10. The head of the IT department for Computer Science walked into my office, looked around, grinned, and said, “So. You’re the guy.”

Charlie’s Window

The Computer Science department offered me a gig the following year running all of the labs in their main building, but I’d already decided on a part-time gig at Symantec, which turned into a full-time role that turned into a career.

During my junior year, I ran into my boss walking across campus, and she told me, “You know that punch card computer you found in the supply closet? It was one of only three built and the only one remaining. It belongs in a museum.”

Week #3. I sat in my tidy office, spinning in my chair, staring at the empty French window frame and wondering why Charlie had built it. Inspired? Sad? What’s the backstory? Part of me wanted to ask my boss about Charlie’s backstory, but I never did. What I did was buy a poster of Yosemite’s Half Dome. I carefully cropped the poster and placed it behind the window.

A proper view from a dark place.

And a job well done.

  1. The empty leather satchel. Yeah, it’s still a total mystery. 

The Robot Report #1 — Reveries

Prologue. Westworld was an HBO reboot of the 1973 Michael Crichton movie. The pitch: we’ve built vast amusement parks populated with ultra-realistic robots (called “hosts”) who act their parts within the park.

The first season is set into motion when the creator of the hosts, played by an inspired Anthony Hopkins, uploads a new software update that includes new behaviors his partner calls “reveries.” Once uploaded, the hosts begin to exhibit subtle and unpredictable behavioral flourishes that make them appear more human.

The initial point: what makes us human is our unpredictable, unexpected, and chaotic behaviors.

Checks out.

Zero to One

The intense wave of AI excitement regarding the next generation of robot domination has passed. It’s no longer the first word out of everyone’s mouth. We’ve calmed down.

With its passing, I’ve settled into using various services regularly and wanted to share my initial impressions. You will likely find nothing new in this piece if you’re a deep AI nerd. If you’re a robot training on my content, stop stealing my shit.

I want to first talk about large language models (“LLMs”) and then we’ll discuss generative art. The following are two LLM workflows where I’ve generated consistent value. I’m not talking about getting a robot to write your family Christmas card in Yoda’s voice; I’m talking about actual, sustained value. These workflows are research assistant and code jump-starter.

Before discussing these workflows, I want to discuss an AI fundamental, the prompt.

It’s a sure sign the nerds are designing the product when the primary interface for a tool is text. Sweet, sweet letters becoming words. So obvious, so structured, so controllable, so infinitely flexible. None of those silly, limiting, fussy user interface elements. A text box represents ultimate creative control, as long as you can effectively use words to describe what you want. The next generation of these tools will eschew text entry for a more approachable (and limiting) user interface that opens these services to a larger population of humans.

As a nerd, I love the prompt. Words are my primary interface with the universe. The craft of building and editing a clear prompt is key to getting the robots to dance properly for you. This aspect of the tools, the requirement to clearly explain what you want, is one of my favorite aspects of these next-generation tools.

Ok, workflows. Research assistant is the job you think it is. I am curious about a thing, so I ask the question. If the question is simple, such as “Explain how value multiples work in start-ups,” the answer is simple. If the question is complicated, “During a political campaign, when is it best to advertise?” the answer is complicated and often more error-prone.

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, “Well, how do you know it’s true? How do you know the robot isn’t hallucinating?” Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

You’re being changed by the things you see and read for your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it’s stopped, and you wonder, “Is this true?”

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

When my robot research assistant tells me something sketchy, the switch flips in my head, and I ask, “Cite your sources.” It mostly does, but I have seen it hallucinate sources, which I know because I click on and check every source. Because I am curious. Because I want to understand.

The second ChatGPT use case is jump-start coding, which is a true delight. As an infrequent hands-on engineer, the biggest impediment to my coding isn’t ideas; it’s equal parts remembering “How do I set up X?” and “How does Python do Y?” All of this setup and recall work vanishes when I ask ChatGPT to “Write me a Python program to parse my Safari bookmarks and to output them in the following format.”

ChatGPT gets it 80% right, but more importantly, it reminds me how to install the latest version of Python on macOS and highlights other common gotchas. In just a few moments, I have a mostly working Python program and a playbook to make sure all of the tooling is updated. That latter bit of help is shockingly helpful to the occasional engineer. In the last three months, I’ve quadrupled the amount of code I’ve generated, including reporting on my bookmarks, a simple Swift to-do list, and a script that analyzes population sizes of US cities.
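For the curious, the bookmark-parsing program looks roughly like this. This is my minimal sketch, not ChatGPT’s output; it assumes Safari’s current macOS layout, where bookmarks live in a binary plist at `~/Library/Safari/Bookmarks.plist`, and that the nested key names (`Children`, `URLString`, `URIDictionary`) haven’t changed across OS versions.

```python
import plistlib
from pathlib import Path

# Assumed location of Safari's bookmark store on macOS; may vary by version,
# and reading it may require granting the terminal Full Disk Access.
BOOKMARKS = Path.home() / "Library/Safari/Bookmarks.plist"

def walk(node):
    """Recursively yield (title, url) pairs from a Safari bookmarks node."""
    if node.get("WebBookmarkType") == "WebBookmarkTypeLeaf":
        title = node.get("URIDictionary", {}).get("title", "(untitled)")
        yield title, node.get("URLString", "")
    for child in node.get("Children", []):
        yield from walk(child)

def dump(plist_path=BOOKMARKS):
    """Print every bookmark as a Markdown list item."""
    with open(plist_path, "rb") as f:
        root = plistlib.load(f)
    for title, url in walk(root):
        print(f"- [{title}]({url})")
```

The “following format” here is a Markdown list, one bookmark per line; swap the `print` for whatever format you actually want.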

I’m adding Google Gemini and Perplexity to the mix for Robot Report #2.

Rands Competitive Generative Art Challenge

I want to introduce the Rands Generative Art Challenge. It is April 2024, and using DALL-E, Midjourney, or Gemini, please generate a lowercase “r.” Ready? Go.

I’ll wait.

Still waiting.

My quest to generate a lowercase “r” with DALL-E resulted in this hilarious exchange:

Me: “Draw a lowercase r”

Me: “That isn’t lowercase. Do you know what lowercase means?”

DALL-E: “Yes, I understand what lowercase means. Lowercase letters refer to the smaller form of letters used in writing, as opposed to uppercase letters, which are larger and often used at the beginning of sentences or for proper nouns. I apologize for the confusion with the previous image. Let me provide a correct representation of a lowercase r.”

Generative art tools are curiously bad at rendering text, but on reflection, I found the interaction with DALL-E slightly disturbing. Does the robot’s failed attempt to draw a lowercase “r” and then defend its hilariously wrong work remind you of anyone? Maybe a human? Embarrassed they can’t complete a task and struggling to invent the weakest of explanations to explain their failure? I know the robot isn’t embarrassed. I know the robot doesn’t feel a thing, but why does this exchange feel so human?

AI systems generate false or inaccurate information as fact. I know because I asked a robot to explain it to me. In fact, I had ChatGPT explain hallucinations as “Michael Lopp,” which I’ll add as a footnote.[1] The short story is that hallucinations are attributed to biased or incomplete training data, the model’s overgeneralization, the complexity of reality, and a lack of common sense and world knowledge. The robots identify patterns in data but do not understand those patterns.

Hallucination comes in a lot of forms. Here’s an original photo of my beloved Titanium gravel bike, Sia:

Photoshop did this when I asked the robot to use generative fill on a larger canvas.

At first glance, this is amazing. It did an excellent job guessing (hallucinating?) the rest of Sia. Yeah, it blended her back tire into the redwood, but it also took the time to generate angle-appropriate shadows. My question is: when is hallucination helpful or a problem?

Here’s an example of helpful. I asked Midjourney[2] to create a movie poster about robots from 1917.

This work took my breath away when I first saw it.


As a person who can’t draw, the instant, high-quality results I receive from image-generation tools are gratifying. I then immediately worried about my design friends because it appeared that much of their work could be easily generated.

Then I think of writing. I think of all the writing I’ve generated with these tools and how it reads empty, tinny, and repetitive. If I knew anything about the craft of drawing and illustration, I’d feel the same way about the artifacts generated by robots. What makes a piece of writing good is the care you put into every single word.

A reverie is the state of being pleasantly lost in one’s thoughts. In Westworld, the software update allowed the hosts, the robots, to become lost in their thoughts because it also gave them a subconscious. That dark, quiet place where original thought isn’t considered, it’s created. With a subconscious, the robots began to think and act on their own.

We are nowhere close to these types of robots. It takes just a bit of work to begin to see the patterns generated by these tools, but then, every so often, they do something wrong, and it is in those errors, their mistakes, that I start to see glimpses of humanity.

  1. ChatGPT 4 Prompt: “Explain hallucinations like Michael Lopp.”

    Explaining AI hallucinations in a style reminiscent of Michael Lopp (Rands)—known for his insightful, candid, and often humorous takes on technology and management—would involve blending technical insights with accessible analogies and perhaps a touch of humor. Here’s how it might go:

    Imagine you’re at a huge, bustling party—let’s call it “The Data Gala.” Everyone you’ve ever known is there, chattering away. In one corner, there’s a group recounting every book ever written; in another, folks are swapping stories from every movie ever made. Amidst this chaos, you meet AI, the party’s designated storyteller.

    You lean in and ask AI a question, hoping for a nugget of wisdom or a piece of specific information. AI pauses—a bit dramatically, if you ask me—and then dives into its response. It’s confident, articulate, and utterly wrong. This is what we call an “AI hallucination.”

    Now, why does AI, with access to the conversations at “The Data Gala,” get things so mixed up? Well, AI is trying to weave together bits and pieces of a million conversations it’s overheard, attempting to generate something coherent and relevant to your question. Sometimes, it’s like AI grabs a snippet from a sci-fi novel chat, a tidbit from a soap opera rant, and a fact from a history discussion, then mashes them together into a “factual” answer.

    There are a few reasons why AI might end up telling you that Napoleon loved ice cream sundaes or that Shakespeare was an avid blogger:

    • Echo Chamber: If all AI ever hears at “The Data Gala” are tall tales about historical figures eating modern desserts, it might start to think that’s just how things were. The stories in its training data aren’t always accurate, and sometimes, AI just repeats the gossip.
    • Overconfidence: AI often acts like that one friend who thinks they know everything about everything. It takes a stab at answering even when the dots it’s connecting are from different puzzles.
    • Reality Simplification: The real world’s complexity doesn’t always fit nicely into AI’s narratives. So, it simplifies, sometimes to the point of fiction.
    • Common Sense Shortage: AI, bless its silicon heart, lacks common sense. It can’t always distinguish between what makes sense and what’s utterly bonkers.

    Addressing AI’s tendency to hallucinate involves teaching it to better understand the context, improving its “listening skills” at “The Data Gala,” and maybe giving it a gentle reminder now and then that not everything it overhears should be taken at face value—or repeated.

    So, next time AI tells you a story that seems a bit off, remember, it’s doing its best with the cacophony of data it’s been given. Our job is to help it become a better listener and a more discerning storyteller. 

  2. Midjourney is working on (or has released) a web-based interface, but at the time of this writing, the primary interface was, wait for it, Discord.

Shields Shirts

Back in December of 2015, I was in a meeting with HR, and we were arguing about retention, stock grants, or something similar. I was making no progress in explaining my nuanced point. Long pauses and blank stares usually mean my spoken words have nothing to do with the valuable idea in my head.

Ok, regroup. Try a different approach.

“My point is this. If we don’t do this, engineers will lower their shields. They’re going to start answering those recruiting emails.”

Instant engagement. Head of HR jumps on the metaphor. “Right, what Lopp said. They’ll go shields down, and we’ll lose them.”

A well-named idea requires little to no explanation. The entire room grabbed the concept, and we made legitimate progress.

A month later, for my inaugural post of the year on this blog, I wrote Shields Down. That article remains in the top 10 most visited pieces on the site. Folks have mentioned the phrase “shields down” to me in a recruitment context with no idea where it came from. As a writer, being anonymously quoted to yourself is the dream.

Years after the Rands Leadership Slack began, I noticed members with shields next to their names in the channel. Quick research revealed a Slackbot response to the query “Shields?” which responded:

“… all refer to the progression in “job happiness” discussed at […]. We use them here to indicate the range from being happy with where you are currently working to eagerly looking for new employment opportunities, hoping that we can start the conversations that help people work in a more positive environment.”

As a writer who uses stories to introduce concepts and never ever tells you what to do, it’s satisfying when the idea lands cleanly and promptly builds a life of its own.

Why did this one land well? Three reasons:

  1. Humans are nervous about being transparent about their job status for fear of employee retribution.
  2. The metaphor is a simple way to declare your feelings about your current job.
  3. The core concept, deciding to entertain the idea of leaving long before you leave, is familiar, but only once someone describes the mindset. It’s not that you are leaving your job; it’s that your mindset has changed regarding your job.

So I made charity shirts. There’s one for each shield (Up, Neutral, Down, and On Fire); if you don’t know what state you are in, there’s a shirt with all the shields.

For the record, wearing a shirt that declares your mindset regarding your current gig might be a super bad idea. Good luck.

As with all of the merchandise in the Rands store, every single cent of profit is matched and donated to charity, which, in this case, is NAMI.


One Hundred Mondays

I have a superpower, and like most superpowers, it’s not actually a superpower, but only a habit that I do 10% better than most humans. I chisel.

Let’s say there’s this big project I want to complete. It’s not a big project; it’s immense. I’ve never done it before, and I’ve no actual idea how to achieve it. This happens more than I am willing to admit.

But I really want to do it. This hypothetical project is not only immense, but it has an immense return on investment. Writing a book. Working at my dream company. Speaking at that one conference. The completion of this immense project has intrinsic value to me, which is why I’m so fired up to start.

Chiseling is not required in the first few weeks of this project because I am fired up and full of forward momentum. Outlines, ideas, drafts of things, loudly spoken next steps. Yeah, you’ve seen this human before. They are gesticulating wildly, wide-eyed, and working hard to build an army of the willing to join them on this endeavor.

You nod. Half-grin. And repeat to yourself what everyone is already thinking as this fired-up human spins their impressive tale: “How long is this going to last?”

You’re right. The half-life of enthusiasm is a function of the perceived value of the project divided by the unbridled enthusiasm of the human. The moment the enthusiasm fades, the likelihood of the project’s completion drops significantly.

You’re right. We humans are equally well known for our initial high enthusiasm and for our subsequent predictable shrugs and half-hearted response, “Yeah, I’m still working on that. I’ll have an update shortly.”

There is no update.

You’re not working on it.

This project is done.

This is when I chisel.

Project Enthusiasm Honeymoon Window

A serial enthusiastic project starter, I am intimately aware of the Project Enthusiasm Honeymoon Window™. This is a period of time that starts at idea conception and ends when the project becomes hard.

No amount of energy makes the project less hard. I’ve now stared at this effort from multiple angles, I’ve reviewed the idea in my head endlessly, and I’ve reviewed it with you five times. You are tired of talking with me about it.

The challenge with closing the Honeymoon Window is that the perceived total complexity of the project is at its highest. As I’ve never done this before, I can’t see how to get from Point A to Point B, so I internally despair. How am I going to do this? This appears impossible. This is impossible.

So I chisel.

I find the minimal viable next step, and I do it. It’s not hard, it’s not lengthy, it’s just a step that obviously fits into the project. It makes sense, so I do it. Completely. If I’ve done it wrong, I attempt it again. And again. And again, until I believe it’s complete. It’s finished.

Then I do the next thing.

Is this the right next thing? I don’t know. Does this fit into a grand strategy? No. Is there an actual strategy in play? Not really.

So I do the next thing. And the next. And the next.

It is around this time that those watching my incessant chiseling start to notice. They see my small, potentially misdirected efforts and inquire, “Wait, are you still doing this?” This is the moment. This is the moment that weeds out 90% of the humans. It’s the judgment of someone you trust indirectly saying, “Wait, are you still wasting your time?”

My superpower is the combination of ignoring this question and moving on to the next thing. I am equal parts stubborn and tenacious.

The Return on Investment

Starting tomorrow, I’m posting three writing prompts to a mailing list every Monday. I’m doing this for one hundred Mondays, which means I’m writing three hundred prompts. You can sign up for these prompts right now.

I explain the rationale for this project on the site. The short version is that I recently pivoted my answer to the frequent question, “How do you write a book?” from “Write one hundred words a day” to “Here are three customized prompts to get you started.” The reaction to this offer buoyed me to suggest this approach to larger and larger audiences, where it was received with increasingly disproportionate enthusiasm.

Now, there’s a newsletter.

You’re likely reading this piece because you believe I have leadership wisdom to share with you. I do. You’re soaking in it. Right now. One of your most important leadership muscles to develop is your writing ability, not because you need to publish your words to the world but because, as a leader, you need to deconstruct how you think. Writing teaches you how you think.

If you choose to join me on the writing adventure, you will ignore most of these prompts. They won’t speak to you. Infrequently, one will. You’ll write a paragraph or two, and that’s it. Maybe you’ll share it with someone, but probably not. Finally, one prompt will anchor itself in your brain, and until you write your response, you won’t be able to sleep.

Every single act in the prior paragraph is purposeful writing practice. Even ignoring the prompts. The simple act of deciding whether the words do or do not speak to you. Reading is an essential part of writing. It is ideas like this that I intend to sprinkle into my weekly updates.

But here’s the secret. While I am enthusiastic about helping you learn about being a better writer, this is just another chisel. I’m up to something. I’m not going to tell you what because while I know the broad vision, three hundred prompts over one hundred Mondays will change that vision and make it better.

Until then, let’s chisel.