
Poets and Police

I was doing this talk, which I’d done dozens of times before. Good, well-practiced deck. I was speaking to CTO-types (current and aspirational) as a favor to a friend. This was a monthly morning coffee chat for this crew, and they invited folks like me to speak. It was on a weekday morning near Slack in downtown San Francisco.

No problem.

Practiced talk, small group, low stakes. I was editing the title slide to update the location and name of the event. No practice necessary; again, I’d done it before.

The talk was based on a piece called Stables and Volatiles. The brief pitch: Stables are those who happily work with direction and appreciate that there appears to be a plan and the calm predictability of a well-defined schedule. Volatiles are the opposite. Read the piece; I like it.

Finished quickly. I was told this was more about a discussion than a presentation. Fine with me. Q&A tells me precisely how well the deck landed, and I’d done this talk enough to believe the Q&A would be rich. Healthy banter. It started that way. Questions about my first stint at Apple and whether well-known people were Stables or Volatiles, but then Leo in the Back Row lost his shit.

“This is bullshit. It’s a false dichotomy.” Leo, the CTO in the Back Row, was pissed about my presentation. For those without ChatGPT at the ready, a false dichotomy is “the fallacy of presenting only two choices, outcomes, or sides to an argument as the only possibilities, when more are available.”

After some back and forth, I told Leo, the CTO in the Back Row, that, like most of my writing, I liked to describe humans in stark, clever ways. This often took the form of a “THIS or THAT” black-and-white structure, but I was 100% clear that the answer to humans was a hard-to-define grey area. My job was to get you to think, not to define every possible configuration of human behavior.

I’d delivered that answer before, and it worked, but Leo, the CTO in the Back Row, was having none of it. He was still angry — now, I am guessing — because I’d wasted his time. He was promised a structured model, and I delivered confusing poetry.

Leo, the CTO in the Back Row, was the Police. And the Police don’t like poetry.

Guess What, Leo

I have another false dichotomy for you: the Poets and Police.

Poets:

  • Finish things. Usually.
  • Use rich language to describe abstract situations.
  • Believe well-formed, highly descriptive ideas make the world an understandable place.
  • Are fine with ambiguity because they understand it’s all just shades of grey.
  • Fall in love with ideas. They’ll fall in love with a single choice word.
  • Like to use the word “feel” because feelings are distilled intuition expressing themselves as inspiration.
  • Love thoughtful compliments.

Police:

  • Finish things. Wow, they finish things.
  • Crave well-defined structure and rules.
  • Believe rules make the world an understandable and measurable place.
  • Hate ambiguity because it provides no direction.
  • Deeply enjoy both debating and enforcing those rules. They believe this is how you make future measurable progress.
  • Will debate a single word that is out of place until clarity is achieved. (Poets do this, too.)
  • Are excellent at measuring anything with metrics. Are unlikely to believe unmeasurable truth.
  • Never use the word “feel” because feelings are irrelevant to getting the job done.
  • Appreciate well-defined accomplishments.

Two things.

First, as a Poet, I know I am describing the Police from my perspective. Police will profoundly disagree with many of the attributes I describe. I am eagerly listening.

Second, yes, this article is similar to my much earlier piece on Organics and Mechanics, but I feel it’s stronger writing.

Third, I am making up a third thing. To annoy the Police who were keeping count because that is how we Poets roll.

Success is Both

As a self-declared Poet, I can confidently describe the Police because it is a job requirement that I develop strong working relationships with these essential humans. I need them because the Police do the challenging work of keeping the trains on time. This isn’t simply holding conductors to a schedule but also maintaining the trains, taking care of the track, and ensuring we have a qualified staff of humans to do all this work. Oh, and how about a budget? How are we going to afford all of this? Someone needs to build a credible business plan for this train company so we can afford to keep the trains on time.

As a self-declared Poet, I understand the aspirational goals of this train company. I also understand the importance of consistently sharing this vision with everyone. I know we need to listen because we need to understand how the company feels. I’m adept at organizing teams of humans with differing ideas and skills. It’s an endless puzzle that I enjoy attempting to solve. I love celebrating our victories. I feel our failures deeply, but I know that with the Police, we will learn from these failures.

Listen. Leo. The CTO. You there — in the back row. I get why you’re mad. See, while we differ in how we view the work, we are the same regarding what’s essential. We want the team to succeed, and we want them to advance. I’ve learned some of my favorite moves watching you work, Leo. I’ll work hard to try not to waste your time with too much poetry if you work hard to understand that poetry is part of how we describe and achieve the impossible.


Words on Founder Mode

I’ve worked at three successful start-ups and one failure. I’ve also worked at post-IPO successes such as Borland, Netscape, and Apple, which means I’ve seen a lot of different founders who, if you measure success financially, were quite successful.

My backstory aside and with deep respect, most founders fail. You’ve heard the stories of successful founders because they’ve become famous (or infamous). However, the majority of start-ups fail. No one tells and retells the stories of these companies because they never launch. No one became rich or famous. It is their defining characteristic. In his recent essay, Paul Graham talks about the successful founders. However, it’s not “Founder Mode,” it’s “Successful Founder Mode.” Lumping all Founders together would mean we should — statistically and more descriptively — call this “Failing Founder Mode,” which is neither clever nor inspirational.

As a person deeply in love with naming things, I like the framing of Founder and Manager Mode because it’s clever and instantly useful. If you’ve been reading me over the years, you’ve noted I’ve begun to detest the term manager for some of the reasons Graham highlights: unfamiliar with the details, management at a distance, lousy hiring, and siloed decision-making. I’ve gravitated towards the word leader both because I want to make it clear any motivated human can execute the skills of a good manager — leadership comes from everywhere — and, more importantly, I believe managers tell you where you are. Leaders tell you where you are going. It’s a philosophy thing.

It’s less clever and symmetric, but I would define these two modes as “Founder Mode” and “Scale Mode” because one of the many things you need to do as a successful start-up is scale. Landing managers is one of those tactics, but it’s just one essential investment you make as you scale. Other tactics are equally important. Focusing on managers as the thing that makes you a larger successful company gets you… more managers. Gross.

My observation from three successful start-ups, three post-IPO companies where the founders still ran the show, and one failure: the founders were all eccentric humans. In a hypothetical room full of humans that included these founders, you would eventually notice their behavior or conversational pattern. You would single them out in your head as the hair on the back of your neck stood up, and you’d wonder, “What… is going on there?” My one start-up failure? The founders? By-the-book, MBA types. Standard vanilla leadership. Trust me, you’ve never heard of them or the start-up.

Graham hints at some of the attributes of Founder Mode but mostly says it’s not well-defined. It is. Founder Mode is the culture of a company, and a culture is defined by the character of the founders. Here are the values I’ve discovered over and over again working with these humans:

  • The whole team is involved in the details. Anyone can argue about the product because everyone works to be a product expert.
  • Everyone does the work. The stratification of responsibility is a red flag not just in rapidly growing teams, but in any company. Everyone files bugs because everyone uses the product. Yes, there is job specialization, but there is also a belief that we are equally accountable for the product.
  • An organization chart doesn’t tell you who can speak with whom; it tells you who is accountable for what. It’s a map. Not a power structure.

And finally, hire leaders, not managers.


Late Again

Awkward.

Seven of us now. Sitting around the table. Five minutes since the start of the meeting. We’ve used up our chit-chat allowance and wonder if you will show.

In the scheme of things relevant to a company’s success, showing up late to a meeting is not the end of the world. But when it happens a lot, when it’s always the same person, and that human is a leader, I develop a severe tic about this human. Here is my chain of increasingly frustrated and snarky inner dialog:

  • Do you know that you are late?
  • Do you understand why you are consistently late?
  • Do you understand the total cost of this lateness? It’s the room full of humans waiting for you, plus the repeat of the information they’ll need to hear when you arrive.
  • Do you understand the subjectively immeasurable credibility hit you take each time this happens?

With that out of my system, let’s get to the heart of my concern. If you understand the problem and further understand the cost of this behavior, why is it continually happening?

Many meetings are predictable. Look at your calendar first thing in the morning. Is this an achievable set of work? No double bookings. Good. Do you see where you are likely going to be time-constrained? You’ve got a meeting you must attend that always runs long — ok, move the meeting after it. You’ve got one meeting on the opposite side of the campus? Ok, build in travel time and adjust the calendar as necessary.

Some meetings are not. Day has started, and the universe is throwing you curveballs. It happens. A reliably stable meeting ran long. Did you tell the next meeting to move on without you? Wait, you need to be there? Ok, move to the next available slot — quickly and alert attendees. Loudly. An important meeting landed in the middle of your everything? Ok, what needs to shift? Shift it. Loudly and promptly.

But it just keeps happening. Five minutes late to everything. Too much on the calendar, too many curveballs. You are in a constant state of stress, and this is where I will stop us for a second to state the obvious.

There are leadership types who believe this level of busy is correct. They believe a key part of the job is this level of stress and scurrying to and fro. Some humans find a modicum of pride in this state. You know, the meeting doesn’t start… without me.

To all the serial show-up-laters who fit this description,

You are a failing leader.

Strong words. You thought your consistent five minutes late was a quirk. A minor operational error, but in my book, it’s a strategic flaw because you are not identifying and fixing the fundamentals. There are only so many minutes in a day, only so many meetings you can personally attend, and these are knowable quantities. Failure to address these fundamentals means you are not growing as a leader.

Who can go instead of you? No one? Why? What is so essential about you? Experience? Really? How does one gain experience? You give others the same opportunity that someone else gave you. You ask them, “Can you attend this review?” They look panicked, so you tell them, “Here is the essential context you need. These three things will likely happen; here’s what I do when they occur.” They look less panicked, and you finish with, “I know you can do this.” They grin. Nervously.

Delegation. This is one solution. There are many more, but I’m not listing them because it is your job as a leader to identify the problems that ail your team and be accountable for properly and permanently fixing them.

So you can show up for a meeting on time.


Rands Useful App Awards 2024

You’re not going with that title, are you?

I might.

There are no actual trophies or awards.

Correct. There are three Rands Useful App Awards this year, but before I award them, I will define the criteria I used to select these products.

  1. The application has substance. This is not a one-shot utility that does one thing well; the designers and developers have built an application, a service with deliberate weight. There is a richness in functionality and depth. This is because…
  2. The application effortlessly provides domain expertise. The humans who built this application care deeply about the domain for which this product was built. After you know the application, you think, “Gosh, I wish it did X.” It does X. It does Y. And it doesn’t do Z because you don’t need Z — what you need is Z++, and when you discover that, the application becomes indispensable because…
  3. Each of these products integrates many different data sources. This is not obvious to the end user because the designers and engineers have gone to great lengths to make this integration seamless and allow users to move between views of data effortlessly.
  4. A by-product of deep domain expertise is quality. A motivation to deliver a high-quality application or service can be created in many ways, but well-defined motivation comes from experts, the humans who understand the problem space, and these humans will go the extra mile to ensure the product is built correctly and works as expected. They also…
  5. Obsess about every detail. Obsess? Not too strong a word. Design, for design’s sake, makes the product approachable and perhaps more usable. Design motivated by deep domain knowledge gives each decision defensible meaning. When users encounter these design decisions, they think, “Uh, this product just read my mind. That’s amazing.”
  6. Finally, the product has voice. It has an opinion about what it is. This is the hardest attribute to define (and build) but the easiest to observe. When you fire up the product on your phone or web browser, the product has something to say.

With these attributes in mind, I give you the 2024 Rands Useful App Awards.

Really? Sticking with this whole bit? Ok.

Windy

For the past two years, during the rainy season, I’ve been sending out an update to my Dad’s Messages group. The point: give folks a heads-up regarding rain for the following week. A hyper-local forecast based on a handful of data sources. At the core of that forecast is Windy.

Weather nerds built Windy. Click on that link and observe the vast array of weather-related resources at your fingertips. Unlike WatchDuty, you’re on your own in getting answers to your weather-related questions, but the answers are there: wind, rain, temperature, cloud cover, waves, thunderstorms, air quality… and those are just the layers I can see in my current configuration. There are dozens more.

I augment my local forecasts with other sources besides Windy (Weather Underground is particularly accurate in my region), but Windy is where I start because Windy has a weather answer for everything.

WatchDuty

WatchDuty provides a rich real-time map of nearby wildfires and firefighting efforts. This is probably less immediately interesting to you if you live somewhere with less combustible dry fuel, but we’re all going to care about this more over the next couple of decades.

Pre-WatchDuty, when a fire flares up nearby, you’ll spend hours discovering the following:

  • Major news networks are slow and lack detail,
  • Social networks are full of noise, and,
  • There is a dizzying amount of structured data regarding any fire in the form of maps, satellite imagery, weather forecasts, and other tools.

WatchDuty meets all the Rands Criteria™. I check the fire situation in California a couple of times a week, and — at a glance — I can quickly see hot spots in the state. It quickly conveys location, size, containment, and recent news updates. Also, seeing the real-time status of all aircraft involved in the fire is fascinating. These updates are the core of WatchDuty and are supplied by a curated set of reporters who are sifting through all the noise to find the best and freshest signal.

Flighty

A recent addition to a familiar problem. Everything works when you’re flying except when it doesn’t. Complex systems fail complexly, and when they do, the difference between spending a night at a crap hotel in O’Hare and making the next flight to Newark is timely information.

Flighty will not rebook your flight to Newark, but Flighty will give you every opportunity to become aware of emerging complex disasters. The application crushes all the criteria listed above, but the outstanding aspect is, “These designers and developers fly. They fly a lot.” This is best demonstrated by the Flighty Passport, which clearly shows all your past flights, total number of miles, flight days, airports, and airlines. How do they know this? Because, of course, you put all your itineraries in TripIt, right? Of course you do.

The immense amount of pertinent travel information built into Flighty is staggering, but if you haven’t obsessively kept track of your travel for years, just put in the flight number of your next flight and see what Flighty can teach you.

A Function of Focus

I originally called these the Rands Indie Awards, and while each of the applications is some form of independent, I think the larger observation is they aren’t a product of a large company. Flighty looks like it’s three humans. Windy is owned by a billionaire but appears to have fewer than twenty developers. WatchDuty is a not-for-profit and appears to have fewer than twenty full-time humans.

Why is the size of the team important? The size of the team directly correlates to how well they can focus. For each of these products, it’s clear they picked a domain or a problem space they deeply understood, and with a small set of hands, they focused on building a substantive, high-quality product with voice.


Seven Conversation Hacks

  1. Say their name if you think they aren’t listening. In a meeting with five or more humans, it’s ok if someone checks out of part of the conversation. Not every topic is of equal interest to all humans. When you know the conversation is steering back to a human who isn’t listening, say their name.
  2. Repeat the hard part when you don’t understand. Or, repeat the last thing they said and add a question mark. Ask questions if you don’t understand.
  3. Pause if you need more time to think. Let the conversation breathe.
  4. Move your line of sight below theirs. Hunch over a bit. This changes the sense of who is in charge of the conversation. I learned this subtle move from a fascinating book about improv. We, as humans, react to the relative position of another’s gaze. Higher, they are driving. Lower, they are receiving.
  5. Look them in the eye if you think your point isn’t landing. Or if it’s important to them. Repeat the point. Once. A variant of saying their name, except in this scenario, you have their attention, but it’s unclear if they understand the point.
  6. Stop talking. Ask them what they think. Or slow your cadence. I get on a roll often, conversationally, and what feels like a delicious conversational tale to me is rambling. When the room starts to tell me this, I stop. I stop for five seconds. In five seconds, you can effectively reset the tempo of a conversation. Possibly my favorite conversational move.
  7. Listen to the room when you are done to see and hear what they heard. Does the conversation continue immediately on the same or related topic? Excellent. Is there a painful, long silence where it’s clear you didn’t deliver your message? Keep trying.

The Robot Report #2 — Her

There are two classes of songs I listen to when writing. Words and no words. Word songs are used when the writing does not require flow or deep thought. The problem with words is they get in your head, bump around, and start creating more words. At a time when I am attempting to focus on a specific set of words, word songs are not the solution.

Nonword songs have no words and are deployed to encourage the correct words. The lack of words but the presence of melodic feeling (plus one to three cups of coffee) is the perfect partner for new word writing.

Since the release of Spike Jonze’s Her in 2013, the reigning champion of non-word songs continues to be Dimensions by Arcade Fire or Owen Pallett — it’s unclear who wrote it1.

“Her” was back in the news recently with the report that Scarlett Johansson was super pissed that OpenAI allegedly trained on her voice for the 4o2 release of their model, which included scary impressive voice interactions. Since these allegations were revealed, OpenAI has been diligently releasing information that proves they trained on a totally different person… who kind’a sounds like Scarlett Johansson. It was reported that one of Sam Altman’s favorite movies is Her, in which Johansson uses her trademark semi-gravelly voice to give life to Joaquin Phoenix’s AI companion.

This important kerfuffle regarding protecting actors’ personas is not my point. My point is: Her is a profoundly sad piece of cinema (that I deeply love). It beautifully documents a not-too-distant future where we no longer ignore each other with our faces jammed into our phones; we’ve been liberated and now freely walk the world talking to our phones… ignoring each other.

The Proper Interface

Yesterday, I was working on a future piece about my beliefs regarding team size and organization depth. I have shared this information for years: the ideal team size is 7 +/- 3, and the ideal organization depth is 5 — not including the CEO. I concluded this thought with the closing point, “And these constraints fan out nicely; you can build a quite large organization following these guidelines.”

But I didn’t do the actual math. I estimated. With ChatGPT omnipresent on my desktop, I described the above constraints and asked, “How big of an organization can I build?” There were typos and colloquialisms in my question, and ChatGPT answered it instantly and correctly. When it was done, I typed, “Thank you.”

I typed “Thank you”?

Who was I thanking?

There is a spectrum of how humans think about large language models (“LLMs”). On one side, some declare, “They are superhuman-level autocomplete engines;” on the other end, we have those who believe, “They are partially sentient future destroyers of the world.” As is custom, the answer is somewhere in the middle.

Wherever you lie on that spectrum, you are skipping the most important innovation of these LLMs: the conversation.

Return to the example above and consider how I would complete the same task in Google. I wouldn’t. I’d start scribbling the math on a piece of paper and figure out the potential size of these organizations. Maybe if I were stuck, I would type “common math equations regarding measuring organization sizes” and stare dumbly at a wall of ads and possibly valuable equations.

Read that last sentence again. I wrote, “I would type,” not “I would ask.” Typing keywords versus asking a question. It’s an entirely different mode of thinking for me. If I’m typing something in a search engine, I’m trying to figure out the keywords that give me a page that might answer my question. If I’m asking a question, I’m using my natural and familiar language to describe the problem I am trying to solve or the question I am attempting to answer.

Here’s my prompt:

“If I have a rule that teams can only be seven to ten in size and there can only give five levels of management, what is my maximum organziation [sic] size?”

Google’s response included:

  • A “Did you mean?” where it corrected my typo.
  • A bunch of ads for Microsoft Teams.
  • Then, there is a link to an article from a VP at Stripe who describes how to size and assess teams.

ChatGPT’s response answered the question and showed its work so I could verify the math.
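
If you’d like to check the robot’s work yourself, here’s a minimal sketch of the math in Python. The function name is mine, and it assumes the most generous reading of my prompt: every manager runs a full team, and all five levels of management below the CEO are fully populated.

    # A sanity check of the fan-out math.
    # Assumptions (one reading of the prompt): every manager runs a
    # full team, and all five levels below the CEO are fully staffed.

    def max_org_size(team_size: int = 10, levels: int = 5) -> int:
        total = 1  # the CEO
        layer = 1  # humans at the current level of the tree
        for _ in range(levels):
            layer *= team_size  # each human above manages a full team
            total += layer
        return total

    print(max_org_size(10, 5))  # 111,111 humans
    print(max_org_size(7, 5))   # 19,608 humans at the conservative end

Even at the conservative end of 7 +/- 3, five levels fan out to nearly twenty thousand humans, which is the “fans out nicely” I was hand-waving about.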

There’s more. Because I was in a chat mindset, I asked follow-ups. What were the organizational size caps with different constraints? What if I added one more layer? And when I was done with my queries, I typed, “Thank you.”

That final thank you feels like a throwaway conversational flourish until you think like a robot. The robot didn’t parse that as thanks; it parsed it as “This human believes my answer was correct.” This is essential data for improving future queries.

Comparing Google and ChatGPT is not a fair comparison. It’s comparing a search engine to a large language model. Two vastly different stacks of technology. Guess what, it doesn’t matter. Your average human is searching for the lowest possible friction means to get the highest possible quality answer. I’ve been twisting my brain into mental knots for decades, trying to figure out the proper set of keywords and searching for the proper web page that might answer my questions.

ChatGPT answers my question because I ask my question like I’m talking to a human.

Yes, ChatGPT is aggressive and confident even when it’s impressively wrong. Guess what? That makes it more human than robot.

Profoundly Sad

When the iPhone was first announced, a recurring debate amongst my friends was, “The touch screen makes or breaks this device.” See, we’d been promised touch screens for years before the iPhone — they existed, but every single screen before the iPhone had discernible lag. From the moment you began to touch and drag, there was a bit of distracting delay. That brief moment of dissatisfaction ruined the magic: here was yet another touch screen that reminded you of its technology.

Technology is magic when it meets our expectations and reflects our reality. When you touch and drag a screen, you expect it to react precisely as when you perform the same actions in the real world.

The magic of the ChatGPT 4o voice interaction demo wasn’t the bajillions of engineering hours that went into powering the models that allowed the robot to respond, it’s that she responded how you expected. She responded instantly. She understood your half-words. She stopped when you interrupted. She laughed at your dumb jokes. It was magic because it met our expectations and reflected our reality. This is how I expect a conversation to work.

It is still just a tool.

Her is a profoundly sad movie because it intimately describes how these tools we love have driven us apart. Scene after scene shows vast crowds walking public areas in a low-grade conversational murmur. All the humans are pleasantly talking… to their devices. This was not the primary intent of the movie; it’s a story of the search for love, but it also describes the need we humans have to connect.

Like social media before it, I remain steadfastly exuberant about the potential for this next generation of technology to help. Still, I’m now properly educated that the potentially unimaginable consequences could outweigh the benefits.

It’s a tool. It’s not a human. It’s not Her.


  1. True story. For years and years, the soundtrack for this movie was not available. A handful of the jams were available, but the album, the bulk of the music, was strangely not released. Someone was fighting with someone about something. The original score was finally released in 2021, which means from 2013 until 2021, I searched the dark corners of the internet for this song. This contributed to its uniqueness. 
  2. Version numbers are arbitrary, and I’m confident they didn’t call this ChatGPT 5 because they wanted to quench the “they’re moving too fast” vibe. 

Crazy Charlie’s Window

Second year of UCSC. Decades ago. A predilection for Domino’s Pizza and a fundamental misunderstanding of how credit cards worked left me in a financial hole. I needed a job, any job, and the School of Social Sciences offered a job aligned with my computer science degree: lab technician.

A solid interview with a prompt job offer. This is going to be easy.

No, this is going to be boring.

The lab technician role was among the lowest rungs on the Social Science ladder. My boss was a tenured professor who was somehow protecting grant money by hiring a minimum-wage lab technician. Her description of the role during the interview versus what she described on my first day had little intersection.

“Make sure the lab is tidy. Make sure the terminals work. And whatever else you want to do. You have a storage room where we store old terminals and other random hardware. You have an office. It used to be Crazy Charlie’s. It’s a mess.”

The lab was a large basement full of rectangular tables, each with two terminals on top. I never saw a human use the lab in my first two weeks on the job. The pre-PC terminals were heavy and ugly. A quick assessment revealed that 50% of them refused to turn on.

And that was the good news.

The storage room was long, perhaps twenty-five feet deep, with ceiling-high racks on both sides. These racks were jam-packed with discarded terminals, cardboard boxes full of who knows what, random bits and bobs of technology, and then just random crap. I could get halfway to the back of the room before being stopped by overwhelming clutter. My assessment: random humans had been leaving and piling random crap in this room for a decade.

And that wasn’t the worst news.

My “office” made the storage room look tidy. The prior occupant, described to me by multiple people as “Crazy Charlie,” had apparently been a hoarder, and this office, roughly half the size of the computer storage room, featured a stunningly diverse set of detritus. If I squinted, I could see a desk somewhere under all that junk, but I couldn’t see the back of the office nor much of the side walls. The ceiling appeared intact.

There was no guidance on what to do with the office and storage room. Just the basic “keep the lab tidy” guidance and nothing more. Part-time gig. Twenty hours a week if I wanted; ten hours required. While she did not say this, I felt my boss knew I was a college student; this was a college job, and she was giving me permission to phone it in.

Keep the lab tidy.

Boring.

Two Treasures

It was January. The gig lasted through the school year. The first week, I did what was asked of me. I kept the lab tidy, which meant I spent an hour shuffling around the lab and confirming that the working terminals worked. For the non-functional terminals, I tinkered. Flipping switches on the back of the devices meaninglessly. Oh, look. Now it works. I wonder why. Twice, I went into the storage room to assess the chaos. Twice, I left after five minutes, feeling overwhelmed.

The second week, I did even less tidying. I’d discovered MTrek, and since no one seemed to care, I played MTrek on the terminal. No one noticed because no one was there. As the week came to a close, it was clear:

  • No one was expecting anything out of this gig.
  • No one cared how I spent my time.
  • No one cared if I did nothing.

But I did.

The weekend after the second week, I arrived at the basement of the Social Sciences building at 6am and emptied the contents of the storage room into the hallway. If you were walking through this hallway at the time, you would assume a nerd junkyard had exploded.

My goal with this strategy was to get a sense of everything collected in this space. The categories ended up being:

  • Garbage to be thrown away immediately. No obvious value. This was 40% of the crap.
  • Terminals. Lots of terminals from many years. The functioning ones were kept, and the others were flagged for recycling. Note: I had no idea if recycling was an option, but filling the garbage bin outside the building with dead terminals seemed bad.
  • Documentation. This was the first treasure. Someone had deliberately organized documentation for the terminals. They’d collected this documentation and labeled the front of manuals to clarify which documents described which terminals. Suddenly, the lab wasn’t full of random crap, but terminals I now understood how to manage.
  • Esoteric hardware. The deeper I went into the storage room, the more random, interesting hardware I discovered. There was a punch card computer complete with punch cards. There were odd measurement devices. From their placement and how they’d been stored, it was clear these machines had been placed here for safekeeping.
  • Knick-knacks. Someone had kept a collection of Coke bottles. Like the esoteric hardware, these bottles were here for a reason that I would never understand but could tell existed.

With the hallway strewn with my discoveries, I performed a deep clean of the storage room and then moved saved items back into the room, keeping a handwritten log of each item and, if applicable, a count. Working terminals were labeled as such, non-working, too. Documentation was sorted and placed in a filing cabinet. Esoteric hardware and knick-knacks were carefully placed as before — in a quiet, dark place of honor.

I finished the storage room purge late on Saturday night. It is a truth that you can’t fully appreciate a deep clean until you’ve had time to forget the work involved, but this room was now blissfully clean and orderly. I could walk from front to back easily. I could sense the stored terminals were placed deliberately — there was a sense of system. And the knick-knacks, the strange hardware, gave this room no one cared about a sense of playful mystery.

I originally planned to spend a few hours in the storage room that weekend. I spent ten. As I went to sleep in my dorm, smelling of dust and sweat, I stared at the ceiling and decided to finish the job correctly.

The Office

The garbage factor in Crazy Charlie’s office was twice the amount in the storage room. At some point, someone had decided to use this office as a place to put garbage that somehow wasn’t garbage yet.

Dozens of trips to the garbage bins outside of the building. After I filled one, I walked around the building until I could find another. Unlike the storage room, there were few clusters of interesting items. Some terminals revealed themselves, as well as additional documentation, but mostly, garbage.

Once I reached the back of the room, I discovered the second treasure. Someone, sometime, had built a set of French windows on the room’s back wall. The windows opened… to a blank concrete wall. Against that wall, inside the windows, there sat an empty leather satchel1.

Crazy Charlie, I assumed, had built French windows in his dark basement office. A window to nowhere. Perhaps this is why they called him Crazy Charlie?

You Know What’s Crazy?

Calling people crazy. It’s ignorant. It’s cruel. It minimizes humans rather than supports them. It demonstrates a profound absence of empathy.

In my weekend of purging the storage room and the office, I’m sure I was wandering around a small, interesting part of Charlie’s mind. Someone who was trying to preserve the knowledge about these terminals by saving and organizing this documentation, someone who was keeping interesting hardware, someone who liked Coke bottles, and someone who wanted a view.

You know what else is crazy? Wasting your time. The reason, decades later, I frequently think of this unpaid weekend adventure sifting through years of garbage, hardware, and knick-knacks is that it is when I discovered the compounding, non-obvious value of doing exceptional work.

No one saw the tidy office or the storage room for weeks because they’d given up on those spaces. It wasn’t until Week #4 that my boss brought her flaky terminal to the lab and asked for help. I walked her into the pristine storage room, pulled a working terminal off the shelf (there were four), and told her what switches she’d need to flip to get it to work on our network.

“What happened here?” she asked.

“I cleaned it up.”

Week #6. Her boss walked into my office, which was now a proper office with two desks (one for my computer, the second for repairs). He asked, “I’d love it if you could walk me through how you organized your terminal closet. We have one on the third floor that needs help.”

I grabbed the red binder, which had the up-to-date inventories of the lab, closet, and office, dropped it on the desk, and asked, “How can I help?”

Week #10. The head of the IT department for Computer Science walked into my office, looked around, grinned, and said, “So. You’re the guy.”

Charlie’s Window

The Computer Science department offered me a gig the following year running all of the labs in their main building, but I’d already decided on a part-time gig at Symantec, which turned into a full-time role that turned into a career.

During my junior year, I ran into my boss walking across campus, who told me, “You know that punch card computer you found in the supply closet? It was one of only three built and the only one remaining. It belongs in a museum.”

Week #3. I sat in my tidy office, spinning in my chair, staring at the empty French window frame and wondering why Charlie had built it. Inspired? Sad? What’s the backstory? Part of me wanted to ask my boss about Charlie’s backstory, but I never did. What I did was buy a poster of Yosemite’s Half Dome. I carefully cropped the poster and placed it behind the window.

A proper view from a dark place.

And a job well done.


  1. The empty leather satchel. Yeah, it’s still a total mystery. 

The Robot Report #1 — Reveries

Prologue. Westworld was an HBO reboot of the 1973 Michael Crichton movie. The pitch: we’ve built vast amusement parks populated with ultra-realistic robots (called “hosts”) who act their parts within the park.

The first season is set into motion when the creator of the hosts, an inspired Anthony Hopkins, uploads a new software update, which includes new behaviors that his partner calls “reveries.” Once uploaded, the hosts begin to exhibit subtle and unpredictable behavioral flourishes that make them appear more human.

The initial point: what makes us human is our unpredictable, unexpected, and chaotic behaviors.

Checks out.

Zero to One

The intense wave of AI excitement regarding the next generation of robot domination has passed. It’s no longer the first word out of everyone’s mouth. We’ve calmed down.

With its passing, I’ve settled into using various services regularly and wanted to share my initial impressions. You will likely find nothing new in this piece if you’re a deep AI nerd. If you’re a robot training on my content, stop stealing my shit.

I want to first talk about large language models (“LLMs”) and then we’ll discuss generative art. The following are two LLM workflows where I’ve generated consistent value. I’m not talking about getting a robot to write your family Christmas card in Yoda’s voice; I’m talking about actual sustained value. These workflows are research assistant and code jump starter.

Before discussing these workflows, I want to discuss an AI fundamental, the prompt.

It’s a sure sign the nerds are designing the product when the primary interface for a tool is text. Sweet, sweet letters becoming words. So obvious, so structured, so controllable, so infinitely flexible. None of these silly limiting fussy user interface elements. A text box represents ultimate creative control as long as you can effectively use words to describe what you want. The next generation of these tools will eschew this text entry for a more approachable (and limiting) and understandable user interface that opens these services to a larger population of humans.

As a nerd, I love the prompt. Words are my primary interface with the universe. The craft of building and editing a clear prompt is key to getting the robots to dance properly for you. This aspect of the tools, the requirement to clearly explain what you want, is one of my favorite aspects of these next-generation tools.

Ok, workflows. Research assistant is the job you think it is. I am curious about a thing, so I ask the question. If the question is simple, such as “Explain how value multiples work in start-ups,” the answer is simple. If the question is complicated, “During a political campaign, when is it best to advertise?” the answer is complicated and often more error-prone.

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, “Well, how do you know it’s true? How do you know the robot isn’t hallucinating?” Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

You’re being changed by the things you see and read for your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it’s stopped, and you wonder, “Is this true?”

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

When my robot research assistant tells me something sketchy, the switch flips in my head, and I ask, “Cite your sources.” It mostly does this, but I have seen it hallucinate sources, which is why I click on and check every source. Because I am curious. Because I want to understand.

The second ChatGPT use case is jump-start coding, which is a true delight. As an infrequent hands-on engineer, the biggest impediment to my coding isn’t ideas; it’s equal parts remembering “How do I set up X?” and “How does Python do Y?” All of this setup and remembering vanishes when I ask ChatGPT to “Write me a Python program to parse my Safari bookmarks and to output them in the following format.”

ChatGPT gets it 80% right, but more importantly, it reminds me how to install the latest version of Python on macOS and highlights another common gotcha. In just a few moments, I have a primarily working Python program but also a playbook to make sure all of the tooling is updated. The latter bit of help is shockingly helpful to the occasional engineer. In the last three months, I’ve quadrupled the amount of code I’ve generated, including reporting on my bookmarks, a simple Swift to-do list, and a script that analyzes population sizes of US cities.
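
For flavor, here’s a minimal sketch of the kind of program that comes back from that bookmarks prompt. Two hedges: it assumes Safari’s default Bookmarks.plist location on macOS, and the “title: URL” output is a hypothetical stand-in for whatever format you actually asked for.

    # Parse Safari bookmarks on macOS and print them as "title: URL".
    # Assumes the default bookmarks file; recent macOS versions may
    # require granting your terminal Full Disk Access to read it.
    import plistlib
    from pathlib import Path

    BOOKMARKS = Path.home() / "Library" / "Safari" / "Bookmarks.plist"

    def walk(node: dict) -> None:
        # Leaves carry a URL; folders carry a "Children" list.
        if node.get("WebBookmarkType") == "WebBookmarkTypeLeaf":
            title = node.get("URIDictionary", {}).get("title", "(untitled)")
            print(f"{title}: {node.get('URLString', '')}")
        for child in node.get("Children", []):
            walk(child)

    with open(BOOKMARKS, "rb") as f:
        walk(plistlib.load(f))

Whether the robot’s version is 80% or 100% right, the jump start is real: the file location, the library choice, and the recursive walk are exactly the parts I’d otherwise burn an hour re-remembering.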

I’m adding Google Gemini and Perplexity to the mix for Robot Report #2.

Rands Competitive Generative Art Challenge

I want to introduce the Rands Generative Art Challenge. It is April 2024, and using DALL-E, Midjourney, or Gemini, please generate a lowercase “r.” Ready? Go.

I’ll wait.

Still waiting.

My quest to generate a lowercase “r” with DALL-E resulted in this hilarious exchange:

Me: “Draw a lowercase r”

Me: “That isn’t lowercase. Do you know what lowercase means?”

DALL-E: “Yes, I understand what lowercase means. Lowercase letters refer to the smaller form of letters used in writing, as opposed to uppercase letters, which are larger and often used at the beginning of sentences or for proper nouns. I apologize for the confusion with the previous image. Let me provide a correct representation of a lowercase r.”

Generative art tools are curiously bad at rendering text, but on reflection, I found the interaction with DALL-E slightly disturbing. Does the robot’s failed attempt to draw a lowercase “r” and then defend its hilariously wrong work remind you of anyone? Maybe a human? Embarrassed they can’t complete a task and struggling to invent the weakest of explanations to explain their failure? I know the robot isn’t embarrassed. I know the robot doesn’t feel a thing, but why does this exchange feel so human?

AI systems hallucinate: they generate false or inaccurate information and present it as fact. I know because I asked a robot to explain it to me. In fact, I had ChatGPT explain hallucinations as “Michael Lopp,” which I’ll add as a footnote1. The short story is that hallucinations are attributed to biased or incomplete training data, the model’s overgeneralization, the complexity of reality, and a lack of common sense and world knowledge. The robots identify patterns in data but do not understand those patterns.

Hallucination comes in a lot of forms. Here’s an original photo of my beloved Titanium gravel bike, Sia:

Photoshop did this when I asked the robot to use generative fill on a larger canvas.

At first glance, this is amazing. It did an excellent job guessing (hallucinating?) the rest of Sia. Yeah, it blended her back tire into the redwood, but it also took the time to generate angle-appropriate shadows. My question is: when is hallucination helpful or a problem?

Here’s an example of helpful. I asked Midjourney2 to create a movie poster about robots from 1917.

This work took my breath away when I first saw it.

Reveries

As a person who can’t draw, the instant, high-quality results I receive from image-generation tools are gratifying. I then immediately worried about my design friends because it appeared that much of their work could be easily generated.

Then I think of writing. I think of all the writing I’ve generated with these tools and how it reads empty, tinny, and repetitive. If I knew anything about the craft of drawing and illustration, I’d feel the same way about the artifacts generated by robots. What makes a piece of writing good is the care you put into every single word.

A reverie is the state of being pleasantly lost in one’s thoughts. In Westworld, the software update allowed the hosts, the robots, to become lost in their thoughts because it also gave them a subconscious. That dark, quiet place where original thought isn’t considered; it’s created. With a subconscious, the robots began to think and act on their own.

We are nowhere close to these types of robots. It takes just a bit of work to begin to see the patterns generated by these tools, but then, every so often, they do something wrong, and it is in those errors, their mistakes, that I start to see glimpses of humanity.


  1. ChatGPT 4 Prompt: “Explain hallucinations like Michael Lopp.”

    Explaining AI hallucinations in a style reminiscent of Michael Lopp (Rands)—known for his insightful, candid, and often humorous takes on technology and management—would involve blending technical insights with accessible analogies and perhaps a touch of humor. Here’s how it might go:

    Imagine you’re at a huge, bustling party—let’s call it “The Data Gala.” Everyone you’ve ever known is there, chattering away. In one corner, there’s a group recounting every book ever written; in another, folks are swapping stories from every movie ever made. Amidst this chaos, you meet AI, the party’s designated storyteller.

    You lean in and ask AI a question, hoping for a nugget of wisdom or a piece of specific information. AI pauses—a bit dramatically, if you ask me—and then dives into its response. It’s confident, articulate, and utterly wrong. This is what we call an “AI hallucination.”

    Now, why does AI, with access to the conversations at “The Data Gala,” get things so mixed up? Well, AI is trying to weave together bits and pieces of a million conversations it’s overheard, attempting to generate something coherent and relevant to your question. Sometimes, it’s like AI grabs a snippet from a sci-fi novel chat, a tidbit from a soap opera rant, and a fact from a history discussion, then mashes them together into a “factual” answer.

    There are a few reasons why AI might end up telling you that Napoleon loved ice cream sundaes or that Shakespeare was an avid blogger:

    • Echo Chamber: If all AI ever hears at “The Data Gala” are tall tales about historical figures eating modern desserts, it might start to think that’s just how things were. The stories in its training data aren’t always accurate, and sometimes, AI just repeats the gossip.
    • Overconfidence: AI often acts like that one friend who thinks they know everything about everything. It takes a stab at answering even when the dots it’s connecting are from different puzzles.
    • Reality Simplification: The real world’s complexity doesn’t always fit nicely into AI’s narratives. So, it simplifies, sometimes to the point of fiction.
    • Common Sense Shortage: AI, bless its silicon heart, lacks common sense. It can’t always distinguish between what makes sense and what’s utterly bonkers.

    Addressing AI’s tendency to hallucinate involves teaching it to better understand the context, improving its “listening skills” at “The Data Gala,” and maybe giving it a gentle reminder now and then that not everything it overhears should be taken at face value—or repeated.

    So, next time AI tells you a story that seems a bit off, remember, it’s doing its best with the cacophony of data it’s been given. Our job is to help it become a better listener and a more discerning storyteller. 

  2. Midjourney is working on (or has released) a web-based interface, but at the time of this writing, the primary interface was, wait for it, Discord.